Cord blood IgG and the risk of severe Plasmodium falciparum malaria in the first year of life
Plasmodium falciparum is a leading cause of childhood morbidity and mortality, with approximately 214 million cases and 438,000 deaths reported globally in 2015. A disproportionate number of the malaria-related deaths occur in sub-Saharan Africa, with children under the age of 5 years being at the highest risk of severe and life-threatening malaria. Severe malaria in children manifests in three overlapping clinical syndromes: severe anemia, impaired consciousness and respiratory distress. The presentation of these clinical features varies with host age and the level of malaria transmission. In high transmission areas, severe anemia is predominant and affects mainly children aged less than 24 months, while in low-moderate transmission areas cerebral malaria is the main clinical manifestation in older children, causing high mortality despite appropriate intervention. A significant proportion of those who recover develop long-term neurological and cognitive deficits. Young infants in malaria-endemic countries are relatively resistant to severe malaria. Cord blood antibodies are thought to confer protection against clinical episodes of malaria, but the evidence is far from clear. Although passively transferred cord blood IgG was shown to reduce parasitemia and clinical symptoms in one study, the targets of such antibodies have yet to be identified. Importantly, although many studies have investigated maternal antibodies in relation to the risk of infection, clinical or febrile malaria, none have focused on severe malaria as the endpoint of interest. We designed a case-control study of severe malaria nested within a longitudinal birth cohort of infants who were monitored for episodes of well-characterised severe malaria. We identified the sub-group of infants for whom a cord blood sample was available. We measured cord blood plasma total IgG levels against five recombinant P. falciparum merozoite antigens and their functional activity in the growth inhibition activity (GIA) and antibody-dependent respiratory burst (ADRB) assays. We investigated factors that were likely to have an influence on these antibody measures and assessed the decay of antigen-specific cord blood IgG over the first 6 months of life. Finally, we investigated whether antibody levels and function in cord blood were associated with reduced odds of developing severe falciparum malaria at different time points during the first year of life, when maternal antibodies are likely to persist. The study was conducted in Kilifi County, on the Kenyan coast. The area experiences two seasonal peaks in malaria transmission. The study setting and study population are described in detail elsewhere. Briefly, following informed consent, infants born to mothers who delivered at Kilifi County Hospital (KCH) or those attending the immunisation clinic during the first month of life were recruited into a birth cohort (the Kilifi Birth Cohort, KBC) set up between 2001 and 2008 to study the risk factors of invasive pneumococcal disease in young children. As the study was primarily set up to study pneumococcal disease, malaria-specific indices such as intermittent preventive treatment during pregnancy and bed net usage were not recorded. The children were followed up quarterly at the Outpatient Department of KCH until 2 years of age. During the quarterly visits, a blood sample was collected and thick and thin blood smears prepared for detection of parasites by microscopy. In the event of an illness outside the scheduled 3-monthly visits, parents were advised to seek care at KCH and the children were treated according to national guidelines. Children who were admitted to hospital were identified using a unique number that linked their clinical, demographic and laboratory information. We designed a matched case-control study of well-defined severe malaria cases that included all infants enrolled into the KBC and longitudinally monitored as described in Section 2.1. We included cases admitted to hospital between April 2002 and January 2010. Cases were individually matched to a maximum of three controls by age, date of sample collection and area of residence. Controls were selected from KBC participants who did not present to KCH with severe malaria during the 8-year monitoring period. A total of 61 severe malaria cases were identified and these were individually matched to 161 controls. The data presented here are drawn from the subset of these children who were recruited at birth and had a 2 ml venous blood sample taken from the umbilical vein. Following informed consent, baseline information and a cord blood sample were obtained. We also analyzed samples collected at 3 and 6 months of age from the cases and controls to determine the dynamics of decay of maternal antibodies. Inclusion criteria for severe malaria cases were admission to hospital between April 2002 and January 2010 with detectable parasites by microscopy and one of the following symptoms: impaired consciousness, chest indrawing or deep breathing, or severe anemia. Detection of malaria parasites in the samples collected every 3 months was performed retrospectively by microscopy and PCR as previously described. Briefly, thick and thin blood films were stained with Giemsa and examined by light microscopy. Parasite densities were determined as the number of parasites per 8,000 white blood cells and expressed per μl of blood. The prevalence of submicroscopic infections was determined by PCR amplification of the polymorphic block 3 region of the merozoite surface protein 2 gene, followed by capillary electrophoresis.
We measured total IgG titres to a panel of five recombinant merozoite antigens that are currently being assessed in clinical, pre-clinical, animal model and in vitro studies as potential blood-stage malaria vaccine candidates. Reactivity to schizont extract was used as a marker of previous exposure to infection. Full-length apical membrane antigen 1 (AMA1) was expressed as a histidine-tagged protein in Pichia pastoris, MSP-2 was expressed as a glutathione S-transferase (GST)-fusion protein in Escherichia coli and MSP-3 as a maltose-binding protein-fusion protein, also in E. coli. The C-terminal 19 kDa fragment of MSP-1 and a fragment of P. falciparum reticulocyte-binding homolog 2 (PfRh2) were expressed as GST- and His-tagged fusion proteins, respectively, in E. coli. A P. falciparum schizont lysate based on the A4 parasite line was prepared by sonicating mature schizont stages. Total IgG responses against the P. falciparum merozoite antigens described in Section 2.2 were simultaneously measured by multiplex ELISA as described previously. We also measured IgG to parasite schizont lysate using a standard ELISA protocol. Eleven serial dilutions of a purified IgG preparation obtained from Malawian adults were included for every antigen tested to obtain a standard dilution curve that allowed the conversion of median fluorescence intensity readings to arbitrary antibody concentrations. A pool of plasma obtained from Kilifi adults was included in a single well on each plate as a positive control to allow for standardisation of day-to-day and plate-to-plate variation. Twenty plasma samples obtained from UK adults who had not been exposed to malaria were also included as negative controls for each antigen tested. Seropositivity for antibody titres was defined as an ELISA O.D. value above the mean + 3 S.D. of the 20 malaria non-exposed UK plasma samples. All samples were assayed in duplicate and, for those that had a coefficient of variation greater than 20%, the assays were repeated.
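As a concrete illustration of these ELISA decision rules (the mean + 3 S.D. seropositivity cut-off and the 20% coefficient-of-variation repeat rule), a minimal sketch is given below. The array names and example values are illustrative assumptions, not data from the study.

```python
import numpy as np

def seropositivity_cutoff(negative_control_od) -> float:
    """Cut-off = mean + 3 standard deviations of the malaria non-exposed control ODs."""
    neg = np.asarray(negative_control_od, dtype=float)
    return neg.mean() + 3 * neg.std(ddof=1)

def needs_repeat(od_rep1, od_rep2, max_cv: float = 0.20):
    """Flag duplicate wells whose coefficient of variation exceeds 20%."""
    reps = np.stack([od_rep1, od_rep2]).astype(float)
    cv = reps.std(axis=0, ddof=1) / reps.mean(axis=0)
    return cv > max_cv

# Illustrative values: 20 UK negative-control ODs and three test samples run in duplicate.
uk_controls = np.array([0.04, 0.05, 0.06, 0.05, 0.04, 0.05, 0.06, 0.04, 0.05, 0.05,
                        0.06, 0.04, 0.05, 0.05, 0.04, 0.06, 0.05, 0.04, 0.05, 0.06])
cutoff = seropositivity_cutoff(uk_controls)        # ~0.072 for these example controls
rep1 = np.array([0.30, 0.04, 0.90])
rep2 = np.array([0.32, 0.05, 0.50])
seropositive = (rep1 + rep2) / 2 > cutoff          # sample-level call: [True, False, True]
repeat_flags = needs_repeat(rep1, rep2)            # only the third sample (CV ~40%) is flagged
```

In practice the cut-off is computed separately for each antigen, since each antigen has its own set of negative-control readings.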
The assays of GIA and ADRB activity were performed as previously described using cord blood plasma samples. The GIA assay has been used to assess the vaccine efficacy of blood-stage vaccine candidates both in animal models and clinical studies. The assay has also been associated with protection from clinical malaria in some studies, but this has not been a consistent finding. The ADRB assay has also been shown to correlate with protection against clinical episodes of malaria in field studies. Cord plasma samples were dialyzed in 1× PBS using 20 kDa MWCO mini dialysis units and incubated at 56 °C for 30 min to inactivate complement proteins. Highly synchronous trophozoite stage parasites from the 3D7 P. falciparum strain were added to individual wells, followed by the dialyzed plasma at a ratio of 1:10. The plates were incubated in a humidified chamber containing 5% CO2, 5% O2 and 90% N2 for 80 h. Ten microliters of culture medium were added to each well after the first growth cycle. Positive and negative control wells containing 10 mg/ml of purified Malaria Immune Globulin and a pool of plasma from UK adults, respectively, were included. After two cycles, the parasites were stained with 10 μg/ml of ethidium bromide for 30 min, washed with 1× PBS and acquired on an FC500 flow cytometer. Plasmodium falciparum 3D7 parasite cultures were maintained at <10% parasitemia and 2% hematocrit. Highly synchronous mature trophozoite stages were enriched by magnetic separation and allowed to mature into early schizont stages. Thereafter, a protease inhibitor (E64) was added to allow development into late schizonts without rupture. The schizonts were pelleted, resuspended in 1× PBS and stored at −20 °C. Whole blood samples from healthy donors were collected into heparin tubes, layered onto Ficoll-Histopaque 1077 and centrifuged at 600g for 15 min at room temperature. The pellet containing PMNs and red blood cells was resuspended in 3% dextran solution and incubated for 30 min at RT in the dark. Thereafter, the supernatant was carefully removed and centrifuged at 500g for 7 min. Residual RBCs were lysed using ice-cold 0.2% NaCl followed by 1.6% NaCl. PMNs were washed in PMN buffer (supplemented with 0.1% BSA and 1% D-glucose) and resuspended in PMN buffer at a concentration of 1 × 10⁷ PMNs/ml. The ADRB assay was performed as previously described. Briefly, PEMS were thawed and coated overnight onto individual wells of Nunc opaque MaxiSorp 96-well plates at 18.5 × 10⁵/ml. Following three washes with 1× PBS, the plates were blocked and incubated with plasma for 1 h at 37 °C. The plates were washed and 50 μl of PMNs added, followed by 50 μl of isoluminol. Chemiluminescence was measured for 1 s every 2 min over an hour. Control wells containing a pool of UK adult plasma and a pool of plasma from Kilifi adults were included in each plate. Readings were expressed relative to those obtained from the pool of plasma from Kilifi adults. Of the 222 children recruited to the matched case-control study of severe malaria described previously, 130/222 were born at KCH and had a 2 ml cord blood sample drawn at birth. Of these, 32 developed severe malaria; 12, seven and six of whom presented to hospital with respiratory distress, impaired consciousness and severe malarial anemia, respectively. Six infants presented with two overlapping severe malaria syndromes and one with all three syndromes. Of the 32 cases, five occurred during the first 6 months of life, 12 within 9 months and 16 before 12 months. The remaining 16 cases occurred beyond the age of 1 year. No cases occurred in the first 4 months of life. The median age of admission with severe malaria was 12.5 months. The remaining 98/130 children made up the controls, 20 of whom had a history of admission to hospital with gastroenteritis and lower respiratory tract infections. None of the controls was admitted with non-severe malaria.
Twenty-seven out of 130 acquired asymptomatic P. falciparum infections, as measured by PCR or microscopy in the quarterly samples collected up to 2 years of age. Of these, 11/32 were identified among the children who subsequently developed severe malaria and 16/98 among the controls. There was no significant difference in cord blood seroprevalence or levels of antibodies between the cases of severe malaria and controls for any of the antigens tested. Among the cases, the seroprevalence against schizont lysate, AMA1, MSP-2, MSP-3, MSP-119 and PfRh2 was 98%, 93%, 87%, 69%, 46% and 33%, respectively, compared with 100%, 93%, 90%, 68%, 43% and 37% among the controls. The median GIA level was 28.9%, with no statistically significant difference between the cases and controls. Similarly, the median ADRB level was 0.4 indexed relative light units and the median levels were comparable among the cases and controls. We previously published threshold concentrations of antibodies to specific antigens that appeared to be necessary for protection against clinical episodes of malaria. The prevalence of antibodies at threshold concentrations in cord blood was low and, importantly, did not differ between severe malaria cases and controls; AMA1, 6.2% versus 6.1%; MSP-2, 21.8% versus 18.3%; and MSP-3, 18.7% versus 17.7%, respectively. We used a linear regression model to determine which factors were positively or negatively correlated with increasing antibody levels and function in cord blood plasma. Maternal age was positively associated with higher GIA and ADRB levels (P = 0.03 and P = 0.004, respectively), but not with antibodies to individual antigens. However, this result remained significant for ADRB levels only after adjustment for multiple comparisons. Surprisingly, we observed an increase in GIA and ADRB levels over the duration of recruitment, a period marked by a continuous decline in malaria transmission intensity. Other factors such as birth weight, parity, gestation period, season and gender of the child did not significantly influence the levels of specific antibodies measured in cord blood or their functional activity. The decay rate of antibodies to specific antigens was determined using a longitudinal mixed-effects model. A regression line was fitted through log10-transformed antibody titres for both cases and controls aged less than 6 months. All the index samples were collected prior to the severe malaria episode.
To assess the decay of antibodies in the absence of boosting by P. falciparum infection, we excluded results from five cases and five controls who had detectable parasites at either month 3 or 6. In addition, all samples from individuals whose antibody titres at 3 months of age were higher than cord titres were excluded and, for those whose titres at 6 months were higher than at month 3, only the result for the 6-month sample was excluded. A total of 56 samples were excluded from the analysis. The mean half-life ranged from 2.51 months (95% CI: 2.19–2.92) to 4.91 months (95% CI: 4.47–6.07). There was no significant difference in the mean decay rate of antibodies against AMA1, MSP-2 and MSP-3. However, anti-MSP-119 and anti-PfRh2 antibodies had significantly longer mean half-lives than antibodies against the other antigens tested (up to 4.91 months). The rate of decay of antibodies did not differ between infants who subsequently became cases and those who were controls, although the numbers in each group were limited and precluded the ability to detect any significant differences. Next, we tested the rates of decay of antibodies according to the quartile distribution of cord titres. The titres were divided into quartiles and the decay rate of antibodies within each quartile assessed, as shown in Fig. 3. Titres in the highest quartile decayed most rapidly, followed by those in the 3rd, 2nd and 1st quartiles, in that order, for all antigens tested with the exception of AMA1. For example, the decay rate for MSP-2 antibodies was 0.14, 0.22, 0.23 and 0.31 log10 AU reduction per month in the 1st, 2nd, 3rd and 4th cord quartiles, respectively. The average rate of decay and 95% CI estimates across the different quartiles for all antigens tested are shown in Fig. 3F–J.
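To make the decay analysis above concrete, the sketch below shows one plausible way to fit a longitudinal mixed-effects model to log10-transformed titres and to convert the fitted decay rate (log10 AU lost per month) into a half-life. The model specification and column names are assumptions for illustration; the paper does not give this level of detail.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def antibody_half_life(samples: pd.DataFrame) -> float:
    """Fit log10(titre) ~ age with a random intercept and slope per infant,
    then convert the mean decay rate (log10 AU lost per month) into a half-life.
    Assumed columns: child_id, age_months, titre_au."""
    df = samples.assign(log10_titre=np.log10(samples["titre_au"]))
    model = smf.mixedlm(
        "log10_titre ~ age_months",     # fixed effect: average decay slope
        df,
        groups=df["child_id"],          # one random-effect group per infant
        re_formula="~age_months",       # random intercept and slope
    )
    fit = model.fit()
    decay_rate = -fit.params["age_months"]   # log10 AU lost per month
    return np.log10(2) / decay_rate          # months until the titre halves

# Worked conversion under this log-linear assumption: a decay rate of
# 0.14 log10 AU/month corresponds to a half-life of log10(2)/0.14, i.e. roughly 2.2 months.
```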
We evaluated the relationship between cord antibody titres and their functional activity with the odds of developing severe malaria at different time points after birth. Although the median age of admission with severe malaria was 12.5 months, we excluded the cases of severe malaria that occurred beyond 12 months, as it was highly unlikely that cord blood antibodies were responsible for protection. Thus, we analysed samples from a total of 16 severe cases admitted to hospital during the first 12 months of life. These were individually matched to four controls per case based on date of birth. Three cases were born a few days apart and were therefore matched to a similar set of four controls. Seropositivity for antibody titres and ADRB was defined as a cut-off above the mean + 3 S.D. of 20 European plasma samples, whereas the GIA cut-off for positivity was defined as being above the median GIA level of the cord plasma samples. Of the 130 children who had a cord plasma sample, 26/32 and 16/32 severe malaria cases were seropositive for ADRB and GIA, respectively. Of the 16 severe malaria cases admitted to hospital during the first 12 months of life, 2/5, 8/12 and 12/16 were seropositive for ADRB during the first 6, 9 and 12 months of life, respectively, compared with 18/20, 43/47 and 50/55 of controls. On the other hand, 3/5, 8/12 and 9/16 of cases and 12/18, 29/44 and 36/59 of controls had antibodies above the median GIA levels during the first 6, 9 and 12 months of life, respectively. There was a positive and significant correlation between cord blood antibodies to all the antigens tested in both the severe malaria cases and controls, with the exception of antibodies against MSP-2 and PfRh2, and against MSP-3 and PfRh2, among the controls but not the cases. Assays of GIA and ADRB activity were significantly correlated among the controls but not the cases. There was only limited co-linearity between the different indices of immunity that we measured. We observed a significant increase in ADRB levels, but not GIA activity, with increasing breadth of merozoite-specific cord blood antibodies. Antibody titres to all antigens tested were not associated with protection against severe malaria at the different time points after birth. Children who were positive for ADRB had significantly reduced odds of developing severe malaria during the first 6 months of life (OR 0.07, 95% CI: 0.007–0.74, P = 0.007) and this association remained strong and significant until 9 months of age but was lost thereafter. In contrast, GIA was not associated with protection at any time point. We have previously demonstrated that children who had antibodies that were capable of inhibiting growth of parasites and mediating release of reactive oxygen species by neutrophils had significantly reduced odds of developing severe malaria during the first year of life. Here, a combination of the two functional assays was not associated with protection against severe malaria. We designed a case-control study of severe malaria nested within a longitudinally monitored birth cohort. This design provided a unique opportunity to determine whether antibodies present in cord blood were associated with a reduced risk of severe malaria during infancy. We found that cord blood IgG in the assay of ADRB activity was significantly associated with lower odds of developing severe malaria. This association was only observed for severe malaria cases occurring within 9 months of birth and fits with expectations based on the half-life of passively transferred maternal antibodies. Cord blood IgG against the merozoite antigens tested was not associated with protection, and half-lives ranged from 2.5 to 4.9 months for the antigens tested. Interestingly, antibody decay rates were inversely proportional to the initial titres present in cord blood. The ADRB assay has been assessed in a limited number of studies and has been shown to correlate with protection against clinical episodes of malaria in some of these investigations. No studies have evaluated the association of cord blood antibodies capable of inducing ADRB with the subsequent risk of malaria in early infancy, and we propose that our study is unique in this respect. The strong correlation between ADRB levels and the breadth of responses to merozoite antigens suggests that multiple targets mediate the overall ADRB activity.
On the other hand, the weak correlation between GIA and ADRB highlights the distinct mechanisms of action measured by these assays and suggests that the merozoite targets may be different, or that antibodies to the targets that mediate these mechanisms were absent. GIA was not associated with protection against severe malaria. Our study differs from previously published reports that investigated the protective role of maternal antibodies against the outcomes of infection and/or clinical or febrile malaria. None of the previous studies focused on severe malaria as an endpoint. Furthermore, the duration of observation in previous analyses has varied, ranging from 5 months to 2 years. Our present analysis supports the view that maternal antibodies against merozoite antigens are unlikely to persist at high titres beyond 5 months, and that this relatively short period during which antibodies are actually available may account for the inconsistency in findings from studies investigating the protective role of maternal antibodies. Importantly, whether the end-point of such studies is severe or uncomplicated malaria, clinical episodes as a whole are relatively rare in children under 6 months of age, despite the fact that they frequently harbor low-level asymptomatic infections. Thus, whilst antibodies may play a protective role, the lack of sufficient cases during this period limits such analyses, which will require very large studies. A key and interesting finding in our study was that antibody decay rates are inversely proportional to the initial titres present in cord blood, a finding that is in contrast to a study by Riley et al. showing that cord blood antibody titres in Ghanaian infants persisted for longer in infants who had antibody levels above the median O.D. at birth than in those below the median. However, results similar to ours have been reported from studies of viral infections that demonstrated a rapid postnatal decline of maternally transferred antibodies against rubella, parainfluenza type 3 and influenza A2 in children with high titres at birth compared with infants with low initial titres. IgG sub-class antibodies against merozoite antigens have also been reported to decay faster than their expected theoretical half-life in older children. We speculate that the rate of decay of maternal antibodies may have been influenced by the presence of sub-patent infections. Moderate to high maternal antibody titres may have a masking effect on infections in infants, and this could contribute to the rapid decay of circulating antibodies. On the other hand, infections in the presence of low-titre maternal antibodies could result in seroconversion and persistence of antibody titres. Detection of asymptomatic infections in our study was not comprehensive, as sampling was undertaken only once every 3 months. The distribution of different IgG subclasses could also affect the rate of antibody decline. For instance, IgG3 is more rapidly degraded than the other subclasses. In other infections, factors such as nutritional status of the infant, breastfeeding, environmental factors and the presence of other concurrent infections during infancy have not been shown to influence the kinetics of antibody decay, but their effect on malaria-specific maternal antibodies is not known. We found that anti-merozoite antibodies in cord blood were not associated with parity, maternal age, birth weight or gestational age.
In contrast, other studies demonstrated a positive relationship between parity and placental malaria infection with increased levels of total IgG to var2CSA and merozoite antigens. Multiple factors are thought to influence the transplacental transfer of antibodies and may account for this disparity. These include maternal factors such as differences in maternal titres, which vary depending on the antigen, IgG subclass distribution, nature of the antigen tested, timing of exposure to antigens during pregnancy and the duration of this exposure. Varying analytical approaches are also applied to the study of maternal antibodies; some studies use cord blood antibody levels as a proxy measure of the efficacy of transplacental antibody transfer, while others consider the ratio of antibodies in the mother to those in the infant. We did not have data on placental malaria, HIV infection, IPTp or use of insecticide-treated bed nets, all of which may have an impact on antibody levels. In summary, we found that cord blood IgG activity in the ADRB assay was strongly associated with lower odds of developing severe malaria in the first 9 months of life. Although larger studies are needed, these data suggest that ADRB could be useful for the identification of targets of protective antibodies that could be translated to the clinic as candidate vaccines for infants and young children who are most susceptible to death due to severe malaria.
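Because cases were individually matched to controls, the association between ADRB seropositivity and severe malaria reported above would typically be estimated with conditional logistic regression. The paper does not name the exact model, so the sketch below is one standard, illustrative approach; the column names are hypothetical.

```python
# Hedged sketch: odds ratio for ADRB seropositivity in an individually matched
# case-control design, using conditional logistic regression (one stratum per matched set).
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

def matched_odds_ratio(df: pd.DataFrame) -> float:
    model = ConditionalLogit(
        df["case"],                 # 1 = severe malaria case, 0 = matched control
        df[["adrb_positive"]],      # 1 = ADRB-seropositive cord plasma
        groups=df["match_set"],     # identifier shared by a case and its matched controls
    )
    result = model.fit()
    log_or = np.asarray(result.params)[0]
    return float(np.exp(log_or))    # an OR below 1 indicates lower odds of severe malaria
```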
Young infants are less susceptible to severe episodes of malaria but the targets and mechanisms of protection are not clear. Cord blood antibodies may play an important role in mediating protection but many studies have examined their association with the outcome of infection or non-severe malaria. Here, we investigated whether cord blood IgG to Plasmodium falciparum merozoite antigens and antibody-mediated effector functions were associated with reduced odds of developing severe malaria at different time points during the first year of life. We conducted a case-control study of well-defined severe falciparum malaria nested within a longitudinal birth cohort of Kenyan children. We measured cord blood total IgG levels against five recombinant merozoite antigens and antibody function in the growth inhibition activity and neutrophil antibody-dependent respiratory burst assays. We also assessed the decay of maternal antibodies during the first 6 months of life. The mean antibody half-life range was 2.51 months (95% confidence interval (CI): 2.19–2.92) to 4.91 months (95% CI: 4.47–6.07). The rate of decline of maternal antibodies was inversely proportional to the starting concentration. The functional assay of antibody-dependent respiratory burst activity predicted significantly reduced odds of developing severe malaria during the first 6 months of life (Odds ratio (OR) 0.07, 95% CI: 0.007–0.74, P = 0.007). Identification of the targets of antibodies mediating antibody-dependent respiratory burst activity could contribute to the development of malaria vaccines that protect against severe episodes of malaria in early infancy.
Comparison of 24-hour Holter monitoring with 14-day novel adhesive patch electrocardiographic monitoring
The Scripps Institutional Review Board approved the protocol, and all patients enrolled gave informed consent to participate.Between April 2012 and July 2012, patients referred to the cardiac investigations laboratory at Scripps Green Hospital for ambulatory ECG monitoring were fitted with an adhesive patch monitor and a 24-hour Holter monitor.Both devices were activated simultaneously.Patients were enrolled prospectively in a consecutive fashion on the basis of appropriate eligibility criteria.Inclusion criteria included an age of 18 years or older and being under evaluation for cardiac arrhythmia, capable of providing informed consent, and able to comply with continuous ECG monitoring for up to 14 days.Exclusion criteria were any known skin allergies, conditions, or sensitivities to any of the components of the adhesive patch monitor, receiving or anticipated to receive pacing or external direct current cardioversion during the monitoring period, or the anticipation of being exposed to high-frequency surgical equipment during the monitoring period.The Zio Patch is an FDA-cleared, single-use, noninvasive, water-resistant, 14-day, ambulatory ECG monitoring adhesive patch."A study coordinator applied the device over the left pectoral region of the patient's chest. "A trigger button, integrated into the monitor's design, can be activated to create a digital time stamp on the continuously recorded data stream to synchronize the recorded ECG rhythm with symptoms.Patients were instructed to activate the trigger should they experience any suspected symptom of arrhythmia.Patients also were instructed to wear the adhesive patch monitor for as long as possible, with the goal of obtaining up to 14 days of ECG data recording.On day 14 or at any time point prior, the patient removed and returned the adhesive patch monitor by means of a prepaid mail package to iRhythm Technologies, Inc."ECG data were collected and interrogation was performed using the manufacturer's FDA-cleared, proprietary algorithm.The data then underwent technical review for report generation and quality assurance.This report was then uploaded to a secure website for independent review by physician investigators at the Scripps Translational Science Institute.Per standard institutional practice, the Holter monitor was fitted by a cardiac technician and returned at 24 hours to the cardiac investigation laboratory for interrogation.Holter monitor data were independently analyzed by physician investigators at the Scripps Translational Science Institute.Reports from both the adhesive patch monitor and the Holter monitor were made available to the referring physician.Any ECG data that were thought to be of urgent clinical concern from the Holter monitor or adhesive patch monitor, as determined by the physician investigators, were relayed to the referring physician within 24 to 48 hours.Arrhythmia events were defined as detection of any 1 of 6 arrhythmias, including supraventricular tachycardia, atrial fibrillation/flutter, pause >3 seconds, atrioventricular block, ventricular tachycardia, or polymorphic ventricular tachycardia/ventricular fibrillation.Arrhythmias were categorized into 2 groups.The first consisted of all 6 arrhythmias.The second consisted of the 5 most clinically significant arrhythmias, which excluded supraventricular tachycardia.The primary aim of the study was to compare the detection of arrhythmia events between the adhesive patch monitor and the Holter monitor over the total wear time of both devices.Secondary end 
points included comparison of detection of arrhythmia events over a simultaneous initial 24-hour period and survey data examining patient preference to both devices.Arrhythmia events were analyzed for 2 arrhythmia groupings including all 6 arrhythmias and the 5 more clinically significant arrhythmias as previously described."McNemar's test was used to compare if any 1 of the 6 arrhythmias or any 1 of 5 more clinically significant arrhythmias were detected by the adhesive patch monitor versus the Holter monitor for 24 hours for the Holter monitor and up to 14 days for adhesive patch monitor and then for the first 24 hours of observation for both devices.Descriptive statistics were provided for age, total wear time, and survey results."A sample size of at least 120 after attrition achieves 80% power for a 2-tailed McNemar's test.Because a planned interim analysis was performed for when 50% of patients were enrolled, the alpha for the interim analysis was 0.005 and an alpha of 0.048 was used for the final analysis.The interim analysis requirement was met, and the study was completed.The study was designed and data were collected by the Scripps Translational Science Institute.Fought Statistical Consulting independently analyzed the data.SAS 9.3 was used to perform the statistical analyses.Of the 238 patients screened, 88 declined enrollment.A total of 150 patients were enrolled, and 4 were lost to follow-up, 3 in the adhesive monitoring patch group and 1 in the Holter monitoring group.A total of 146 patients with data on both the 24-hour Holter monitor and the adhesive patch monitor were included in the final analysis.The median age for patients enrolled was 64 years, and 41.8% of patients were male.The median wear time in days for the Holter monitor and adhesive patch monitor was 1.0 and 11.1, respectively.Of the patients with complete survey data, 93.7% found the adhesive monitoring patch comfortable to wear as opposed to 51.7% for the Holter monitor."The adhesive patch monitor affected 10.5% of patients' activities of daily living as opposed to 76.2% of patients in the Holter group.When asked whether they would prefer to wear the adhesive patch monitor or the Holter monitor, 81% chose the adhesive patch monitor.Of the 102 physicians surveyed, 90% thought a definitive diagnosis was achieved using data from the adhesive patch monitor, as opposed to 64% using data from the Holter monitor.When device data were compared over the total wear time, the adhesive patch monitor detected significantly more events than the Holter monitor.For all 6 arrhythmias, the Holter monitor detected 61 arrhythmia events compared with 96 arrhythmia events by the adhesive patch monitor.Of these events, 60 were detected by both the Holter monitor and the adhesive patch monitor.The adhesive patch monitor detected 36 events that went undetected by the Holter monitor primarily as a function of prolonged monitoring.There was only 1 instance when the Holter monitor detected at least 1 event and the adhesive patch monitor did not.Because the substantially increased performance of the adhesive patch monitor may be a function of detecting less clinically meaningful supraventricular tachycardias over an extended monitoring period, supraventricular tachycardia events were removed from the total wear time analysis.The total number of arrhythmia events detected diminished for both devices, but the adhesive patch monitor still detected significantly more arrhythmia events than the Holter monitor, 41 and 27, respectively.Of these 
events, 27 were detected by both the Holter monitor and the adhesive patch monitor.Of note, 14 clinically significant arrhythmia events were detected by the adhesive patch monitor but went undetected by the Holter monitor.Over the total wear time of both devices, the adhesive patch monitor detected significantly more arrhythmia events when both arrhythmia groups were assessed.As a secondary outcome measure, the adhesive patch monitor was compared with the Holter monitor for detection of arrhythmia events over a simultaneous 24-hour period.In this period, the Holter monitor detected significantly more of the 6 types of arrhythmia events than the adhesive patch monitor.The Holter monitor detected 61 arrhythmia events compared with 52 arrhythmia events by the adhesive patch monitor.Of these events, 50 were detected by both the Holter monitor and the adhesive patch monitor.Of the arrhythmia events detected by 1 device but not the other, the Holter monitor detected 11 arrhythmia events that were undetected by the adhesive patch monitor in the simultaneous 24-hour period and the adhesive patch monitor detected 2 arrhythmia events that were undetected by the Holter monitor.Of the 11 events undetected by the adhesive patch monitor in the first 24 hours, 10 arrhythmia events were subsequently detected by the adhesive patch monitor beyond 24 hours.Of these 11 events, 8 were the same arrhythmia type as initially detected by the Holter monitor, with 7 being supraventricular tachycardias and 1 being short runs of ventricular tachycardia.Of the 3 arrhythmia events that were different, 2 were episodes of supraventricular tachycardia initially detected by the Holter monitor, but the adhesive monitoring patch detected paroxysmal atrial fibrillation at greater than 24 hours.The other was a single episode of supraventricular tachycardia detected by the Holter monitor and with no arrhythmia events detected by the adhesive patch monitor beyond 24 hours.Because supraventricular tachycardia is often of lesser clinical consequence, the analysis was repeated over the same initial 24-hour period excluding supraventricular tachycardias and including only the more clinically significant arrhythmias of atrial fibrillation/flutter, pause >3 seconds, atrioventricular block, ventricular tachycardia, and polymorphic ventricular tachycardia/ventricular fibrillation.Again, the Holter monitor detected more events than the adhesive patch monitor, 27 and 24, respectively, but this did not reach statistical significance.Of these events, 24 were detected by both the Holter monitor and the adhesive patch monitor.Three clinically more significant arrhythmia events were detected by the Holter monitor and not by the adhesive patch monitor, whereas the adhesive patch monitor did not detect any events that also were not detected by the Holter monitor.The benefit of prolonged monitoring is demonstrated by the fact that of the 3 clinically significant arrhythmia events initially undetected by the adhesive patch monitor in the first day of monitoring, all 3 were subsequently detected with extended monitoring beyond 24 hours, 2 of which were short runs of atrial fibrillation and the other a single short run of ventricular tachycardia.Although the Holter monitor detected significantly more events than the adhesive patch monitor over the initial 24-hour monitoring period, when limited to more clinically significant events, 3 events went undetected but were subsequently detected with monitoring beyond 24 hours.The Zio Patch is an 
FDA-cleared, noninvasive continuous ambulatory ECG adhesive monitoring patch that is less cumbersome to wear than a conventional 24-hour Holter monitor. With 93.7% of patients finding the adhesive patch monitor comfortable to wear and 81% indicating they would prefer it over the Holter monitor, it is clearly a less-obtrusive and more patient-friendly monitoring platform. Furthermore, physicians thought a definitive diagnosis was achieved more often using the adhesive patch monitor as opposed to the Holter monitor. The convenience of sending and returning the adhesive patch by mail and the durable capture of ECG rhythm data over the substantially longer monitoring period of up to 14 days offer some distinct theoretic advantages. However, the reference standard is the 3-lead, 24-hour Holter monitor, and the value of a single-lead, 14-day adhesive patch monitor needs to be assessed in comparison with this standard. Our study demonstrated increased arrhythmia diagnostic yield using the prolonged adhesive patch monitor compared with conventional 24-hour Holter monitoring. Although short of the approved 14-day wear time, the median adhesive patch monitor wear time of 11.1 days in this study is likely a sufficient diagnostic window to capture arrhythmia events, because the highest diagnostic yield for arrhythmia detection is usually within the first 7 days of ambulatory ECG monitoring.12 Ambulatory ECG monitoring beyond 7 days often provides a diagnosis in only an additional 3.9% of patients.12 Furthermore, the cost of extended monitoring periods beyond 2 weeks using older technologic platforms can range up to $5832 per new diagnosis, with a disappointing 0.01 diagnosis per patient per week after 2 weeks. This compares with a per-patient diagnosis cost of $98 over an initial 7 days and $576 over a 14-day period, again based on older, often more expensive platforms.12 An average wear time of 11.1 days is then likely to achieve a reasonable balance of adequate diagnostic yield at a reasonable cost per new diagnosis using newer, potentially cheaper technology. Furthermore, the adhesive patch monitor can achieve monitoring periods equal to older event recorder platforms but using less cumbersome technology. Consistent with our findings, there is substantial evidence to suggest that extending the ECG monitoring period beyond 24 hours increases the diagnostic yield of arrhythmia diagnosis. To date, however, this could only be achieved using bulky, activity-limiting technology requiring multiple chest leads.13 Although previous studies have demonstrated the incremental diagnostic yield of prolonging the monitoring period, in this study the extended monitoring was achieved with a more lightweight, unobtrusive, adhesive patch device.14-16 Primarily as a function of extended monitoring, the adhesive patch monitor detected 36 events that went undetected by the Holter monitor. Using the incremental diagnostic yield of an extended monitoring period, as opposed to relying on data acquisition during brief, often asymptomatic periods, is critical, because even prolonged pauses of up to 9.7 seconds can be asymptomatic.17 Over a simultaneous 24-hour monitoring period, the Holter monitor detected more arrhythmia events than the adhesive patch monitor for both groups of arrhythmias used in this study. The Holter monitor's performance advantage in this timeframe, with detection of 11 arrhythmia events not detected by the adhesive patch monitor, was unexpected and warranted explanation.
A root cause analysis was performed to determine the reason for these discrepancies, and each of these cases was then run for a second time through the iRhythm Technologies algorithm and Quality Assurance Tool and reviewed in the page view format, which allowed full visual review of the continuously running ECG data. Of the 11 discrepant arrhythmia events, 2 can be explained by an algorithm misclassification and 7 by a processing error by the initial iRhythm Technologies, Inc, physician reviewer. With respect to the algorithm misclassifications, in one instance the algorithm did not detect the arrhythmia event, possibly because of transiently reduced signal quality, and in the other instance it classified a brief run of supraventricular tachycardia as a sinus tachycardia because it fell into a rate range just outside of the set supraventricular tachycardia zone. In light of this, the adhesive patch monitor supraventricular tachycardia zones have been adjusted. As part of the report generation, an iRhythm physician performs an initial overview of all detected potential arrhythmia events and classifies them accordingly. In 7 of the discrepant arrhythmia event cases, short runs of mostly supraventricular tachycardia were not classified as such and therefore were never surfaced to the report viewed by the ordering physicians or investigators. In-house iRhythm staff training has been implemented to correct this issue. In general, the information provided by the Holter monitor's additional 2 ECG leads is an obvious advantage for both automatic algorithm analysis and physician interpretation. Specifically, 3-lead recordings allow for the detection of arrhythmia events characterized by a shift in electrical axis that can be missed by single-lead recordings. Multi-lead recordings also allow for improved detection of aberrant/broader QRS complexes, where a single-lead recording may not detect the altered QRS complex width because the leading edge or trailing edge of the QRS complex may be relatively isoelectric to the single-lead recording vector. These differences may then apply more so to broad complex tachycardia detection than to narrow complex arrhythmia detection. Evidence from our study supports this, with no episode of atrial fibrillation/flutter detected by the Holter monitor going undetected by the adhesive patch monitor. Of the more clinically meaningful arrhythmia events initially undetected by the adhesive patch monitor in the simultaneous 24-hour monitoring period, all subsequently had a clinically meaningful arrhythmia event detected with prolonged monitoring by the adhesive patch monitor. These arrhythmia events were the same in all 3 cases, with 2 episodes of paroxysmal atrial fibrillation and 1 episode of a brief run of ventricular tachycardia being detected with extended monitoring by the adhesive patch monitor. The patients enrolled included all those referred for ambulatory ECG monitoring rather than only those referred for determination of a previously undocumented arrhythmia. Although the majority had no previously documented arrhythmia, several had preexisting arrhythmias and were referred for reasons other than symptomatic arrhythmia. In practice, the adhesive patch monitor is mailed to and self-applied by the patient, whereas in this study it was applied by a study research coordinator. Over the total wear time of both devices, the adhesive monitoring patch detected significantly more arrhythmia events than the Holter monitor. On the basis of these findings, novel, single-lead, prolonged-duration, low-profile devices may soon replace conventional Holter monitoring
platforms for the detection of arrhythmia events in patients referred for ambulatory ECG monitoring.
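For orientation, the paired comparison described in the Methods can be reproduced from the detection counts reported above, treating the reported figures as patients with at least one detected arrhythmia (both devices 60, adhesive patch only 36, Holter only 1; the "neither" cell is derived as 146 − 97 = 49). The study's own analysis was performed in SAS 9.3; the Python sketch below is illustrative only.

```python
# McNemar's test on paired per-patient arrhythmia detection over total wear time
# (all six arrhythmia types). Counts are from the Results; "neither" is derived.
from statsmodels.stats.contingency_tables import mcnemar

#        Holter detected | Holter not detected
table = [[60, 36],        # adhesive patch detected
         [1, 49]]         # adhesive patch not detected

result = mcnemar(table, exact=True)   # exact binomial test on the 36-vs-1 discordant pairs
print(f"statistic={result.statistic}, p={result.pvalue:.2e}")
```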
Background: Cardiac arrhythmias are remarkably common and routinely go undiagnosed because they are often transient and asymptomatic. Effective diagnosis and treatment can substantially reduce the morbidity and mortality associated with cardiac arrhythmias. The Zio Patch (iRhythm Technologies, Inc, San Francisco, Calif) is a novel, single-lead electrocardiographic (ECG), lightweight, Food and Drug Administration-cleared, continuously recording ambulatory adhesive patch monitor suitable for detecting cardiac arrhythmias in patients referred for ambulatory ECG monitoring. Methods: A total of 146 patients referred for evaluation of cardiac arrhythmia underwent simultaneous ambulatory ECG recording with a conventional 24-hour Holter monitor and a 14-day adhesive patch monitor. The primary outcome of the study was to compare the detection of arrhythmia events over the total wear time of both devices. Arrhythmia events were defined as detection of any 1 of 6 arrhythmias, including supraventricular tachycardia, atrial fibrillation/flutter, pause greater than 3 seconds, atrioventricular block, ventricular tachycardia, or polymorphic ventricular tachycardia/ventricular fibrillation. McNemar's tests were used to compare the matched pairs of data from the Holter and the adhesive patch monitor. Results: Over the total wear time of both devices, the adhesive patch monitor detected 96 arrhythmia events compared with 61 arrhythmia events by the Holter monitor (P <.001). Conclusions: Over the total wear time of both devices, the adhesive patch monitor detected more events than the Holter monitor. Prolonged duration monitoring for detection of arrhythmia events using single-lead, less-obtrusive, adhesive-patch monitoring platforms could replace conventional Holter monitoring in patients referred for ambulatory ECG monitoring. © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Towards prevention of post-traumatic osteoarthritis: report from an international expert working group on considerations for the design and conduct of interventional studies following acute knee injury
Osteoarthritis pathologically represents a continuum from risk exposure, to molecular changes and structural changes with associated pain, which for some people progresses to the need for joint replacement. Detection and treatment of those at high risk of OA could enable effective interventions before any major structural damage has occurred or before pain becomes chronic, that is, at a pre-radiographic or even pre-symptomatic stage. Such intervention would be comparable to current early management of diabetes, cardiovascular disease, osteoporosis or pre-rheumatoid arthritis. Joint injury remains one of the biggest risk factors for OA. In Sweden, approximately 80/100,000 people per year experience anterior cruciate ligament (ACL) rupture; in the U.S. there are 252,000 ACL injuries per year1,2. Fifty percent of people with significant knee joint injuries such as ACL rupture and/or acute meniscal tear subsequently develop symptomatic radiographic OA within 10 years, so-called post-traumatic OA (PTOA)3; at least 33% with acute ACL rupture will have magnetic resonance imaging-defined whole joint OA after 5 years4. PTOA is thought to comprise around 12% of all OA, although its incidence appears to be increasing5,6. However, there are no specific guidelines for clinical trials which seek to measure the effect of interventions for prevention of OA after an injury7,8. There are a number of challenges in study design specific to this area, especially the potentially long study duration needed. As such, regulatory considerations include the identification of surrogate outcomes for PTOA studies and the creation of a new indication: OA prevention. This has led to significant uncertainty for regulatory agencies and drug developers, and has restrained investment by the pharmaceutical industry. An international expert working group was therefore convened with the following aims: to review the literature on existing interventional studies close to the time of knee injury; give an overview of key areas in the field relevant to future interventional studies; define considerations for the conduct and design of trials aimed at prevention of OA; and to highlight knowledge gaps by developing research recommendations in this area. The considerations process was facilitated by the Osteoarthritis and Crystal Diseases Clinical Studies Group of Arthritis Research UK, which was established to develop consensus research priorities and nurture methodologically robust clinical trials. Whilst preventing joint injury is an intervention to prevent PTOA7, our focus was on interventions after knee joint trauma. We conducted an evidence review, followed by a consensus process to develop considerations and a research agenda. Though the evidence review summarized the use of outcome measures, including patient-reported outcome measures (PROMs), no recommendations for specific outcome measures were planned. An evidence review was conducted to identify experimental, interventional studies following acute knee injury with specific reference to post-traumatic knee OA. Systematic searches were conducted across five databases from inception to August 2016. The search strategy was designed in OVID-Medline using text words and subject headings, combining terms for knee injury, osteoarthritis and clinical trials or systematic reviews. All references were imported into Endnote, where duplicates were removed. Screening and study detail extraction were performed by NC and verified by three others. Study inclusion criteria were as follows: population clearly stated to be within 6 months of acute knee injury; interventional study with any comparator; OA or a surrogate outcome measure; reported randomized controlled trials, non-randomized controlled trials or systematic reviews.
Study exclusion criteria included: 'acute' injury not clearly separated from 'chronic', or from other joint disease; non-English-language articles; and letters, comments or editorials. Observational studies of interventions in this area were not included in our evidence search or considerations, as they were felt to be prone to bias and not representative of our main focus, which related to experimental studies. A group of 32 stakeholders, including physiotherapists, orthopaedic surgeons, rheumatologists, sports and exercise medicine physicians, primary care physicians, radiologists, laboratory scientists, statisticians, clinical trialists, engineers, pharmaceutical company experts and four patient representatives, comprised the consensus group. After the evidence review results were circulated, the group convened at a face-to-face meeting. The evidence review, which included a summary on the use of PROMs, was presented, and overviews of literature-identified key areas were given by invited experts: challenges around studies in this area, molecular biomarkers and imaging. Specific case study examples of potential interventional targets and challenges were presented. Three working groups with facilitators and reporters were convened to consider: A: eligibility criteria and choice of outcomes; B: the use of biomarkers as potential stratifiers or outcomes; and C: definition of the injury, the timing of intervention, and considerations for multi-modality interventions. Written notes were compiled and presented by each group's reporter to all stakeholders, and agreement on items and additional overarching points to consider was generated during a final discussion session, chaired by PC, with written statements agreed by all. The meeting was taped and transcribed; any uncertainties were addressed from the transcript. Subsequently, the document and then the manuscript were reviewed by all contributors through an iterative online process. The initial search identified 2476 citations, of which 945 duplicates were removed. Screening of the remaining 1531 abstracts yielded 43 eligible studies. Seven systematic reviews identified a further 15 reported trials. From these 58 papers, 37 unique studies were included. Details of each study are summarized in Supplementary Table 2. The majority of studies involved ACL injury, patellar dislocation or tibial plateau fracture, with the remaining two studies including any 'acute knee injury'. Table I summarizes the basic study details grouped according to type of injury. All but two studies were RCTs. Of 16 studies reporting power calculations, 15 met or exceeded the sample size required. Study duration varied widely, approximately equally distributed across 0–1 years, >1–5 years and >5 years. Most studies compared a surgical intervention against either another surgical or a non-surgical/non-pharmacological intervention. Comparisons of post-operative rehabilitation interventions, pharmacological studies and all other interventions each accounted for ∼8% of all studies. An overview of inclusion and exclusion criteria for all available full-text papers is shown in Supplementary Table 3. Most studies had clearly defined eligibility criteria. Sixty percent provided a specific age range, spanning 13–50 years old. Sex was a specified criterion in only three studies, one of which excluded females. Elite professional sports activity and pregnancy were exclusions in 20% of studies.
Pre-existing conditions or other concomitant injuries excluded patients in 80% of studies. For example, previous index knee injury and/or surgery were exclusions in >60% of studies, and the presence of OA was an exclusion criterion in 25% of studies. One-hundred-and-forty-seven outcome measures were identified, including physical examination outcomes, patient-reported outcomes (of which the Knee Injury and Osteoarthritis Outcome Score (KOOS) was most frequently used5), imaging outcomes, biomarkers and others. Primary outcome measures were identified by only 19 studies. Ten different OA outcomes included nine imaging structural measures and one surrogate measure, KOOS. Only five studies used molecular biomarkers as outcome measures. Recently there has been an increase in our understanding of the molecular pathogenesis of PTOA. Observations from both humans and animal models reveal that diverse signalling pathways are activated by injury9,10. This activation is associated with subsequent bone remodelling, cartilage matrix damage and synovial inflammation11,12. Synovial fluid at the time of joint injury shows marked increases in pro-inflammatory cytokines and within 2 weeks shows evidence of matrix catabolism of both aggrecan and type II collagen13–15. The response appears to differ between individuals, and is represented by a tissue inflammatory response, primarily detectable in the synovial fluid13,14,16. Following injury, a variety of factors may encourage joint homeostasis and resolution, or progression to post-traumatic OA. Further injury or surgery would appear to prolong the inflammatory response to trauma17. There may be an 'early therapeutic window' following joint injury during which inflammatory response genes are up-regulated and matrix degradation is initiated, which could be targeted by intervention18. The optimal and/or latest times at which degradation could be halted or reversed are currently unknown. Much work on OA pathogenesis has been accomplished in animal models, which exploit the association between joint injury and OA, using trauma or surgically-induced injury to predictably induce disease: they are therefore particularly suited to testing early interventions in this setting. Findings from murine models such as those involving destabilization of the medial meniscus appear to translate to human studies of ACL rupture or meniscal tear14. The effects of suppressing certain key pathways in these models have been described in knockout mice19. Despite this, very few interventions have been tested at the time of injury, in rodents or in man, as opposed to established OA, which could account for some of the failure of translation of OA therapeutics to date. However, there may be some molecular differences, as well as some practical challenges, in the testing of intra-articular agents in small animals and in the extrapolation of the optimal timing of an intervention from rodent to man. Glutamate concentrations are increased in the synovial fluid of arthritic joints in humans and animals, activating glutamate receptors on neurones and synoviocytes to induce pain and cause release of IL-620,21. Intra-articular inhibition of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and kainate glutamate receptors at the time of injury or induction of arthritis in rodent models alleviates pain, inflammation and joint degeneration22,23. IL-1 causes cartilage degradation in vitro and is upregulated in synovial fluid following joint injury24,25. Blockade of this pathway with IL-1 receptor antagonist (IL1RA) reduced inflammation and
degeneration in a mouse model of arthritis26.IL-1 or AMPA/kainate receptors represent potential therapeutic targets for preventing later disease, as their inhibition at the time of injury in models of post-traumatic OA reduced disease.IL1RA is the first therapeutic agent to be tested in human pilot studies at the time of knee injury for this indication27.A further example is a small RCT testing steroid injection within 4 days of ACL tear, where the collagen degradation biomarker CTX-II was significantly reduced in synovial fluid in the steroid-treated arms15.Since AMPA/kainate receptor antagonists, IL1RA and steroids are already used in man, re-purposing of existing agents is a real possibility.Imaging-based change following knee injury reflects the initial trauma but also the responses to subsequent changed dynamic knee loading after destabilizing injuries28.The majority of studies include X-ray and MRI cartilage outcomes, both semi-quantitative and quantitative.Although there are a few high-quality longitudinal imaging studies after ACL rupture, more studies are needed.It is possible to define early OA on either X-ray or MRI, and evidence indicates that MRI changes alone can act as an endpoint29.Depending on the target, non-cartilage MR outcomes, either bone-based, such as bone marrow lesions or synovitis-effusion, may be appropriate.Compositional measures using MRI, positron emission tomography or computed tomography remain investigational.Composite metric sequences including T1ρ and T2 have been associated with the PROM KOOS, pain after ACL reconstruction and with synovial fluid biomarkers at the time of surgery30–32.Change in these compositional measures may reflect differences in surgical factors after ACL reconstruction and the pre-injury joint structure33.Consistent changes in cartilage thickness occur after ACL rupture: two cartilage regions quickly increase in thickness over time, whilst other areas decrease34.Within 3 months of ACL injury, there are marked changes in knee bone curvature35.Patellofemoral joint OA appears more prevalent in cohort studies, particularly relating to ACL rupture/reconstruction; however, the PFJ is not always examined by X-ray.Structural changes generally develop slowly, and traumatic and degenerative changes must be clearly separated, although may appear similar.Common OA assessment semi-quantitative instruments are only partially applicable in this setting: Whole Organ Magnetic Resonance Imaging Score, BLOKS and MOAKS do not differentiate between traumatic and degenerative joint changes, and do not include assessment of post-surgical graft integrity36.Anterior Cruciate Ligament Osteoarthritis Score is a new tool which addresses some of these issues including clear differentiation of traumatic from degenerative BMLs, extent of baseline traumatic osteochondral damage and assessment of the graft37.A systematic review in this area reported that meniscal lesions, meniscectomy, BMLs, time from injury and altered biomechanics all are associated with cartilage loss over time after ACL rupture38.Greater cartilage damage at baseline is associated with worse clinical outcome39–41.Presence of cortical depression fractures is associated with a worse International Knee Documentation Committee score at 1 year42.MRI-detected inflammation markers at 2 years after ACL rupture were associated with OA development at 5 years43.Effusion, or presence of BMLs at 1 year, or meniscal tears at any stage were found to be associated with radiological OA at 2 years39.Early bone 
curvature change is predictive of cartilage loss at 5 years and is accentuated by the presence of meniscal injury35. The consensus points to consider are summarized under overarching considerations and three main areas: eligibility criteria, outcome measures, and the definition and timing of interventions and comparators in these studies. Key overarching considerations are included in Table III. It was emphasized that a better understanding of disease pathogenesis was important. The appropriate time-window, role and effects of a proposed intervention on underlying processes such as inflammation, mechanical loading and subsequent bone or cartilage change need to be elucidated. Some findings may usefully be translated from animal models; however, it was also noted that there may be important differences between the response to acute knee trauma and a discrete surgically-induced isolated injury to ACL or meniscus. It was agreed that the considerations highlighted in this paper should be reviewed periodically as more data become available, with a maximum of 3 years before the next revision. Eligibility criteria should be clearly defined and should identify specific groups with a modifiable process following their injury in which to test the intervention. Examples of well-defined groups based on MRI to be included would be ACL tear combined with other injuries such as traumatic meniscal tear44, or chondral damage/cortical depression fracture42. Degenerative meniscal lesions should be considered part of early OA and not included in acute post-trauma studies45. Fifty-five percent of patients sustain simultaneous injuries to both ACL and meniscus46; the ubiquitous biological response to joint tissue injury supports broader inclusion of injury sub-types. Combined ligament injuries or fractures should not necessarily be excluded, but considered as a separate ‘extreme’ phenotype, as they may be at substantially increased OA risk, which may or may not be reversible. Some interventions may be most effective if they exert their effect as soon as possible after the early biological changes that follow injury. The appropriate time window for any intervention after injury needs to be carefully justified, according to its nature. Those aged less than 30 years are more likely to have purely traumatic meniscal lesions; those over age 35 could be at risk of pre-existing OA/degenerative meniscal lesions. Elite athletes are more likely to have past/repeated injuries but may have different responses to injury compared to non-elite individuals. As elite athletes are at high risk of OA, they still represent a relevant subgroup for investigation. Previous substantial knee injury or surgical procedure to the index knee may confound results and should be considered as a possible exclusion. BMI should be documented: excessive obesity has independent effects on disease risk, joint loading and inflammation. Key considerations are shown in Table V. In addition to the collection of longer-term PROMs, repeated, multiple early measures will allow examination of potential earlier surrogate endpoints in the future. Baseline and longitudinal evaluation should differentiate pre-existing degenerative from acute traumatic structural joint damage. The contralateral knee may subsequently be affected; therefore, differentiating index from control knee is important. Considerations around the type of imaging and its frequency include evidence of specific outcome performance metrics, feasibility and cost. Where trials are multi-center, MRI protocols need to be carefully designed. Selection of imaging biomarker requires
understanding of the validity, reliability and responsiveness of each measure.MRI techniques that assess early cartilage changes may be useful.Measures of synovial or fat pad inflammation may be important for anti-inflammatory therapeutics and MRI techniques that quantify synovitis may be considered.Early changes in 3D bone shape seen after injury which predict subsequent OA warrant further study as a potential surrogate endpoint.These were noted to be under development as stratifiers and as outcome measures: none were yet sufficiently evidence-based to act as independent surrogate measures as either an early OA diagnostic, prognostic or patient selection aid for interventional studies.Irrespective of target, to accelerate therapeutic advances, it is important that bio-samples be collected in all cohorts and clinical trials where possible.DNA storage would allow the international community to work collaboratively to identify novel genetic predictors of outcome.Synovial fluid was highlighted as a potentially important biosample, showing biologically important molecular changes after injury and after intervention; synovial fluid molecular changes are likely to have increased utility compared to serum13–15.Contralateral aspiration of synovial fluid was controversial, as the contralateral knee is not always a good control and it is difficult to aspirate normal joints.It is important that non-surgical studies access synovial fluid to avoid bias towards surgical intervention studies.In some cohorts, serum/plasma/urine may be available prior to the injury: measuring change within an individual was noted as analytically powerful.Regarding biomarker choice, the most qualified biomarkers to date, e.g., CTX-II, could be included if cartilage matrix catabolism is a target; synovial inflammation or bone biomarkers, or specific cytokine measurements may be relevant depending on target27.Symptoms of instability could be more reliable than any examination-based measures.However, their sensitivity to change compared with existing measures such as pain should be evaluated further47.The choice of timing of the intervention will depend on the nature and mode of its action and intended effects, as well as the measured outcome.An optimal ‘therapeutic window’ should be carefully defined for any intervention, see also Eligibility Criteria: ‘Time Since Injury’.It may be that identification of high risk phenotypes is possible by imaging or molecular biomarkers at defined times after the injury.Types of intervention are highly varied; where multi-modality interventions are used, these should be carefully defined, and controlled.Drugs could be given systemically or intra-articularly, as single or multiple doses, dependent on agent and duration of treatment, safety considerations and acceptability.A comparator and/or placebo or sham arm should be used, because of the known substantial placebo effect in OA studies48.The comparator will often be standard or usual care, rather than no treatment and requires careful definition.Randomization and placebo control are important principles not only for pharmacological interventions, but also for device and surgical studies, where a large placebo effect would be anticipated and which is not otherwise controlled49.There are a number of practical considerations for successful recruitment, randomisation strategies, the standardisation of the intervention and allocation concealment in these types of studies, particularly when they are multi-site18.This should be carefully 
considered during study design, and a number of existing OARSI recommendations in trials of prevention of joint injury and of established OA are highly relevant here7,8,50,51. The particular challenges and questions highlighted as needing further research are included in Table VII. Patient representatives highlighted concerns about the potential for over-diagnosis or overtreatment in the absence of risk stratification, and further Patient and Public Involvement is encouraged in this area now, and as the field develops. Further evidence is needed on which outcomes should be used in this setting, and on what measurement might act as an acceptable surrogate short-term outcome for future OA. Although these current considerations address interventional studies, the consensus group acknowledged that ancillary/cohort studies which establish associations between PROMs, biomarkers and imaging outcomes could address key knowledge gaps to provide evidence for future trials. The design of these studies should be carefully considered and outcomes appropriately powered, but they may include more exploratory outcomes. Sensitive, specific early measures which might shorten studies should be sought. The consensus group noted that animal studies can inform human studies, and such programs were justifiable to facilitate early translation of targets to humans. Our review of the literature has highlighted a lack of conformity in the design of interventional studies in this area. Evidence from the review and expert consensus has been synthesised in producing these first international considerations on the design and conduct of interventional studies aiming at prevention of OA following acute knee injury. Critical knowledge gaps limiting such trials have been highlighted, and summarised as research recommendations. These considerations are intended to underpin future guidelines as this field evolves. Collaborative working on cohort and feasibility studies is needed to provide better evidence for interventional study design. Studies need to include those patients who are at the highest risk, but whose risk is modifiable by the proposed intervention. The group was aware that some extreme phenotypes, such as combined ligament injuries, may fall outside these criteria. As in OA, predictive risk modelling is needed for knee trauma52. A better understanding of underlying disease mechanisms from both animal and human studies is needed. Understanding how related mechanisms such as inflammation and mechanical loading of the joint after trauma contribute to either resolution or progression to OA was deemed essential for the development of new interventions. The feasibility and acceptability of testing interventions in an acute setting can be challenging. Informed consent for sham or placebo treatments at the time of knee injury needs careful review by patients, healthcare providers and trialists. Sham-controlled trials, including surgical trials, are often needed to provide the best possible level of evidence49. Recent consensus in the classification of early knee OA will facilitate such trials53. Alternative surrogate outcome measures need to be developed to shorten trial duration and improve the likelihood of drugs being developed by industry. MRI costs are relatively high, but may be justified by allowing researchers to examine earlier outcomes. Whilst X-ray follow-up may appear more feasible, its use as a lone imaging modality must be adequately powered. There are some limitations to the approach used. The literature review was performed
to provide evidence for discussions, rather than as a stand-alone piece of work; it was clear after the initial search that areas of interest, such as pharmacological interventions, were not well represented in the current literature, and limitations of generalizability to all types of interventions should therefore be borne in mind. A critical appraisal of the studies was not performed, as it was not felt necessary for the requirements of this review, which was pragmatic in nature. Given the relatively low number of RCTs identified in this area, non-randomized controlled trials as well as RCTs were included where identified. Not all opinions may be equally represented by this type of approach. However, a wide range of stakeholders and groups were involved, including patients. Effort was made to ensure diversity; pre-appointed facilitators and reporters, with note-keeping and voice recording of sessions, ensured a transparent and consistent process. More detailed discussions on considerations of recruitment/randomization/allocation concealment strategies were beyond our scope54. In summary, these initial considerations provide a starting point for further work in this area. These points are intended to be complementary to, and should be considered alongside, OARSI Clinical Trials Recommendations on prevention of joint injury, the design, analysis and reporting of OA RCTs and clinical requirements for development of therapeutics in OA7,50,51,55. The regulatory considerations for a new indication of preventing symptoms or OA structural change following joint injury are unique. Engagement with both regulators and the pharmaceutical industry is essential if the area is to progress and overcome current hurdles. Although such trial designs may be challenging, in order to develop new therapeutics with the aim of patient benefit, the consensus was that progress in this area is both possible and urgently required. All authors made substantial contributions to all three of the following: acquisition of data, or analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; and final approval of the version to be submitted. DJM, FEW and PGC in addition conceived and designed the work, and take collective responsibility for data integrity as a whole. ML – employee of Abbvie pharmaceutical company. FWR – shareholder, Boston Imaging Core Lab., LLC., a company providing radiologic image assessment services to academia and the pharmaceutical industry. DJM – co-inventor on a patent related to the use of glutamate receptor antagonists to prevent osteoarthritis. The meeting was funded by Arthritis Research UK as part of the Clinical Studies Group for Osteoarthritis and Crystal Diseases programme. In addition, the Arthritis Research UK Centre for Osteoarthritis Pathogenesis and the Arthritis Research UK Biomechanics and Bioengineering Centre gave support via their joint organization of the meeting. FEW and PGC are also part of the Arthritis Research UK Centre for Sports, Exercise and Osteoarthritis. No representation from Arthritis Research UK was present at the meeting. No specific funding was received from any other funding bodies in the public, commercial or not-for-profit sectors to carry out the work described in this manuscript. PGC and SK are supported in part through the NIHR Leeds Biomedical Research Centre, and FEW in part by the NIHR Oxford Biomedical Research Centre and the Kennedy Trust for Rheumatology Research. The views expressed are those of the authors and not
necessarily those of the NHS, the NIHR or the Department of Health.
Objective: There are few guidelines for clinical trials of interventions for prevention of post-traumatic osteoarthritis (PTOA), reflecting challenges in this area. An international multi-disciplinary expert group including patients was convened to generate points to consider for the design and conduct of interventional studies following acute knee injury. Design: An evidence review on acute knee injury interventional studies to prevent PTOA was presented to the group, alongside overviews of challenges in this area, including potential targets, biomarkers and imaging. Working groups considered pre-identified key areas: eligibility criteria and outcomes, biomarkers, injury definition and intervention timing including multi-modality interventions. Consensus agreement within the group on points to consider was generated and is reported here after iterative review by all contributors. Results: The evidence review identified 37 studies. Study duration and outcomes varied widely and 70% examined surgical interventions. Considerations were grouped into three areas: justification of inclusion criteria including the classification of injury and participant age (as people over 35 may have pre-existing OA); careful consideration in the selection and timing of outcomes or biomarkers; definition of the intervention(s)/comparator(s) and the appropriate time-window for intervention (considerations may be particular to intervention type). Areas for further research included demonstrating the utility of patient-reported outcomes, biomarkers and imaging outcomes from ancillary/cohort studies in this area, and development of surrogate clinical trial endpoints that shorten the duration of clinical trials and are acceptable to regulatory agencies. Conclusions: These considerations represent the first international consensus on the conduct of interventional studies following acute knee joint trauma.
403
OPEC's kinked demand curve
The price of crude oil has been less stable, and marked by upward shocks, and world economic growth has been slower, since the Organization of Petroleum Exporting Countries (OPEC) first wielded its market power assertively in 1973.1 Before then, major oil companies known as the “Seven Sisters”, in conjunction with the Texas Railroad Commission, stabilized price above marginal cost, using tacit collusion and secret agreements to elude the Antitrust Division of the U.S. Department of Justice.2 Fig. 1 shows the log real price of West Texas Intermediate crude oil and the real rate of growth of the world economy from 1951 to 2010. Why have prices been unstable during the “OPEC era”? What is the effect on the macroeconomy, and what types of policy responses would stabilize and increase macroeconomic growth and employment? The main contribution of this paper is to help answer the former question. It contains estimated net demand to OPEC, including effects of oil prices on world GDP that allow for differences in responses to increases and decreases in price. Estimated asymmetric effects imply multiple equilibrium prices in the cartelized market, and the range of equilibria represents a measure of potential instability in price. Due to the asymmetry, the greater the instability in the price of crude oil, the lower are macroeconomic growth and employment. Poor national economies are more oil-intensive than rich economies, so the effects of the asymmetry are experienced disproportionately in poor countries. Policies that narrow and lower the range of equilibrium oil prices, then, raise GDP and employment, especially in poor countries. These include policies that make net demand to OPEC more price-elastic, policies that reduce net demand to OPEC, and policies that lower OPEC's rate of time preference. A corollary to the latter is that monetary policy is more effective at accelerating or retarding economic activity when OPEC has a larger market share. The main welfare criterion used in this article is world GDP. Bloom and Canning confirm that the positive relationship between national income and life expectancy identified by Preston continues to hold. Ensor et al. find that recessions increase maternal and infant mortality in the earlier stages of a country's economic development. Pugh Yi summarizes literature and U.S. data, argues that poverty, both cyclical and structural, causes abortion, and concludes that raising employment and stabilizing the macroeconomy would reduce abortion. Fig.
2 shows the real price of crude oil from 1973:IV to 2011:II.Worldwide recessions in my constructed data are shown in vertical bars.Three of the five recessions were preceded by oil price shocks, and none of the oil price shocks failed to precede a recession.Far and away the two largest quarter-to-quarter price increases in the OPEC era were $21.41 between 1973:IV and 1974:I and $22.04 between 2008:I and 2008:II.There was a slowdown in the world economy in 1974:II, a recession beginning in 1974:III, and a recession beginning in 2008:III.The third largest increase in price during the OPEC era of $12.63 occurred between 2007:III and 2007:IV.The large shock in 1973 preceded a long term slowdown in world economic growth, and the 2008 recession has been termed “Great”.Over two quarters, from 1978:III to 1980:I, price increased $39.51.GDP declined at an annual rate of more than 8% from 1980:I to 1980:II and 0.7% the following quarter.The price of oil may change in response to a macroeconomic shock or to a shock to production of oil.One might argue that instability in price occurs because OPEC is not consistently effective at counteracting the impacts of such shocks on price.OPEC has been described as “clumsy”,4 but I argue that 1) asymmetric effects of price on GDP incent OPEC to allow shocks to cause price to fluctuate more than it would with symmetry, and I observe that 2) the asymmetry causes fluctuations in price to reduce GDP over time, a bad combination for the world economy.In the dataset used here, the impact of changes in the price of crude oil on the macroeconomy is negative, but the correlation between price and world GDP is positive, and is significant at the 99% level.Variation in price originates more in shocks to GDP than in shocks to production of oil.Variation in price originating in shocks to production of oil is countercyclic, destabilizes the consumption of consumers, and makes the incomes of producers of oil, including OPEC, less procyclic.Variation in price originating in shocks to GDP is procyclic, smooths the consumption of most consumers, and makes the incomes of producers more procyclic.Since variation in price originates mainly in variation in GDP, unstable oil prices overall tend to smooth the consumption of consumers and make the incomes of producers of oil more procyclic.However, if OPEC production changes, as when war or civil conflict causes production in an OPEC country to fall, the resulting “oil price shock” will slow the macroeconomy, covariation will be inverse, and profits to OPEC, and other sellers of oil, will be countercyclic.OPEC can use such countercyclic profits to hedge the systematic risk to world GDP caused by the shock to price.In Section 4.4.3, I estimate demand to be inelastic in the short term, so, assuming increasing marginal cost, an increase in price will raise revenue, lower cost, and lower world GDP.The countercyclic profits can be securitized in advance in a financial instrument that commands a risk premium in financial markets.The premium obtains because such instruments can be used to smooth out undesirable fluctuations in consumption associated with the macroeconomic instability caused by the changes in price.Teitenberg explains that periods of high oil prices may leave developing nations short of foreign exchange.There is no economic risk more systematic than that to the world economy, and OPEC can sell insurance against that risk inasmuch as it results from changes in production.As noted, variation in price overall is 
consumption-smoothing, and causes procyclic variation in the incomes of producers of oil, including OPEC, but policies, such as trading strategic stocks of crude oil, capable of mitigating variation in price originating in changes in production will not only raise GDP over time, but also smooth the consumption of consumers, and make the incomes of producers of oil more procyclic. Because of multiple equilibria leading OPEC to accept shocks to price originating both beyond and within the cartel, and because of countercyclic profits associated with shocks to production of oil, OPEC may find variation in price more profitable than stable prices. The multiple equilibria result from asymmetry in the effects of changes in oil prices on the macroeconomy. The asymmetry also implies that instability in the price of oil lowers economic growth and employment over time, and I proceed here on the assumption that this loss in GDP is greater than any net benefits of consumption-smoothing, though preference may be given to policies that mitigate volatility in price originating in shocks to production, rather than in shocks to GDP. I review literature in Section 2. I describe method, model, and data in Section 3. I present and discuss estimates of world demand for and non-OPEC supply of crude oil, and the effects of crude oil prices on world GDP, in Section 4. The discussion includes estimated ranges of equilibrium prices and elasticities. I conclude, discuss policy implications, and mention further research in Section 5, and I cover more detailed aspects of the econometrics in the appendix. Wirl remarks that it is surprising how few articles have been written attempting to explain volatility in oil prices during the OPEC era: “Given this record of volatility it is surprising that only few papers attempt to explain these ups and downs of oil prices.” He proceeds to list some, however, including Cremer and Isfahani, Rauscher, Gately and Kyle, Powell, Suranovic, Rauscher, Wirl, Wirl and Caban, Hamilton, and Kilian. Explanations relate to capacity utilization, dynamic demand, convex demand, and uncertainty. Cremer and Isfahani and Rauscher “refer to multiple equilibria due to backward bending supply curves”, but not due to asymmetry in demand. This article contributes to this discussion both a theory and estimates of the role of asymmetry in the effects of crude oil prices on world GDP, working through OPEC's maximization of profit, in explaining volatility in the price of crude oil. In addition to the literature explaining volatility in oil prices, there is literature related to this article on asymmetric effects of oil prices on GDP and demand, and on the OPEC cartel. Gately and Huntington, Balke et al., Hamilton, Greene and Ahmad, and others have found that increases in the price of oil lower world GDP, and demand for oil, more than decreases in price raise them. Reasons include nominal rigidities, allocative disturbances and uncertainties, income and liquidity effects, and large transfers of wealth. Mork emphasizes the role of downwardly sticky wages. Lin Lawell finds that non-OPEC producers behave like price takers if the price elasticity of supply is held constant over time, and that they behave like Cournot oligopolists if it is allowed to decline over time. Like Lin Lawell, this article “builds upon existing empirical studies of the petroleum market by addressing the identification problem that arises in empirical analyses of supply and demand.” According to Kaufmann, non-OPEC producers are price takers, and OPEC behavior fits a variety of non-competitive models.
Jin et al. model the world oil market as a Stackelberg game with a dominant firm and competitive fringe producers. Celta and Dahl estimate OPEC's marginal costs. I estimate world demand for crude oil, non-OPEC supply, the effect of crude oil prices on world GDP, and, therefore, net demand to OPEC. The results are used to show estimated elasticities of demand, non-OPEC supply, and world GDP, and the range of equilibrium prices in the cartelized market for crude oil. In their survey of literature on energy demand, Atkins and Jazayeri discuss three major areas of refinement to the traditional model of demand that apply to crude oil: asymmetry; regime change; and changing seasonal patterns. Increases in the price of crude oil affect quantity demanded and GDP differently from decreases. Regarding demand, according to Atkins and Jazayeri, asymmetry in demand is observationally equivalent to long run declining demand due to improving energy efficiency. Griffin and Schulman make a case that a symmetric specification with a trend toward energy-saving technical change is superior. Wing attributes improving energy efficiency to technical progress within industries. I model the direct effects of price on demand as symmetric, and include both a deterministic trend and a lagged dependent variable in the regression. I allow for asymmetric effects of crude oil prices on the world economy, modeling the market as though all asymmetric impacts of price on demand result from asymmetric impacts of price on the macroeconomy. I discuss regime change in the appendix, which contains additional discussion of the econometric methods used to make the estimates. I make no estimate of changing seasonal patterns, but include seasonal dummy variables in the regressions, so forecasts based on the estimates do not reflect outdated seasonal patterns in a static sense. The task of profit-maximization is unusual for OPEC because its cartel equilibrium prices are not unique.5 Gately and Huntington, Hamilton and others have found that increases in the price of oil lower world GDP, and, therefore, demand for oil, more than decreases in price raise them. OPEC as a whole faces a kinked demand curve because of this asymmetry. The kink, in turn, implies a vertical discontinuity in OPEC's marginal revenue curve. Within a corresponding range of prices, decreases in production raise price, but reduce revenue by more than they reduce cost, and increases in production lower price, but raise revenue by less than they raise cost. A change in price, and quantity, will change the location of the kink, shifting the gap in marginal revenue. This is shown in Fig. 3, in which the price and quantity combination changes, but marginal cost passes through the gap in marginal revenue both before and after the change, so both combinations are equilibria. P′ is the highest possible equilibrium price in Fig. 3 because, at that price, marginal cost equals the lower boundary of the gap in marginal revenue.
An increase to a price above P′ would cause marginal revenue to exceed marginal cost for both increases and decreases in production, incenting OPEC to increase production, lowering price. P′ is not stable with respect to decreases in price, but no higher prices are equilibria. P, a more representative OPEC equilibrium price, is not stable with respect to either increases or decreases, and this is at the heart of my argument that the asymmetry has a de-stabilizing effect on price. There are reasons to believe that the asymmetry in demand would stabilize the price of crude oil. OPEC's disincentive to change production at any point is stronger with the asymmetry because the loss in revenue exceeds cost savings when production falls, and cost increases exceed any gain in revenue when production rises. Also, shifts in cost and horizontal shifts in demand cause less instability in price with a kinked demand curve than with a smooth demand curve. With a kinked demand curve, a modest shift in marginal cost will not change the profit-maximizing quantity of production and sales, or price. A proportional horizontal shift in demand will also cause no change in price. A parallel horizontal increase in demand will cause no change or an increase in price, while such a shift always increases price when there is no kink in demand.6 That said, I still argue that the asymmetry has de-stabilizing effects on price that exceed its stabilizing effects, overall. While the asymmetry gives OPEC stronger incentive not to deviate from any equilibrium price/production combination, there is a range of equilibria from which OPEC has such a strong incentive not to deviate. Many things in the global oil market beyond the control of OPEC can change. When they do, OPEC has no incentive to offset their impacts on price; prices both before and after the changes are equilibria. Even unexpected deviations in OPEC production itself may not incent offsetting changes in Saudi or other OPEC production. “Cheating” on quotas, disruptions in production due to war or civil conflict, and the like do not necessarily motivate any stabilizing correction by the cartel, and may motivate further destabilization due to countercyclic profits. OPEC's apparent clumsiness may result, in part, from a multiplicity of equilibria associated with asymmetric effects of changes in the price of oil on the macroeconomy. A vertical shift in net demand to OPEC causes a greater change in price than it would with symmetric demand. Marginal cost passes through the discontinuity gap in marginal revenue before and after a vertical shift in demand, incenting no change in output, leading to a change in price equal to the full vertical shift in demand. I show this in Fig. 4. Demand shifts from D to D′, and price shifts by the same amount, from P to P′, with no change in quantity produced. In contrast, with a smooth demand curve, an increase in demand would lead to an increase in price less than the full vertical shift, because the producer would increase output as marginal revenue intersected marginal cost at a greater quantity of output.
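To make the geometry of Figs. 3 and 4 concrete, the short Python sketch below builds a kinked inverse demand curve from made-up numbers; the kink location, the two slopes, and the marginal costs are illustrative assumptions, not the estimates reported later in this paper. It checks numerically that any constant marginal cost lying inside the gap in marginal revenue leaves the profit-maximizing quantity at the kink, and that a vertical shift in demand then passes through to price one-for-one with no change in output.

import numpy as np

# Illustrative (made-up) kinked inverse demand: a flatter, more elastic segment
# for price increases above the kink and a steeper, less elastic segment below it.
QK, PK = 30.0, 100.0      # kink at 30 billion bbl/yr and $100/bbl -- assumptions
B_UP, B_DOWN = 1.0, 4.0   # slopes above/below the kink -- assumptions

def price(q, shift=0.0):
    """Inverse demand with a kink at QK, optionally shifted vertically by `shift`."""
    base = PK + B_UP * (QK - q) if q <= QK else PK - B_DOWN * (q - QK)
    return base + shift

def best_quantity(mc, shift=0.0):
    """Profit-maximizing quantity for a constant marginal cost `mc` (grid search)."""
    qs = np.linspace(1.0, 60.0, 5901)
    profits = (np.array([price(q, shift) for q in qs]) - mc) * qs
    return qs[np.argmax(profits)]

# The discontinuity in marginal revenue at the kink quantity.
mr_upper = PK - B_UP * QK      # limit from the elastic (price-increase) side
mr_lower = PK - B_DOWN * QK    # limit from the inelastic (price-decrease) side
print(f"MR gap at the kink: [{mr_lower:.1f}, {mr_upper:.1f}] $/bbl")

# Any constant MC inside the gap leaves output at the kink; an MC above it does not.
for mc in (10.0, 40.0, 75.0):
    q = best_quantity(mc)
    print(f"MC = {mc:>5.1f}: optimal Q = {q:.2f}, price = {price(q):.2f}")

# A vertical demand shift passes through to price one-for-one when MC stays in the gap.
q0, q1 = best_quantity(40.0, shift=0.0), best_quantity(40.0, shift=15.0)
print(f"Vertical shift of $15: Q {q0:.2f} -> {q1:.2f}, P {price(q0):.2f} -> {price(q1, 15.0):.2f}")

With these numbers the marginal revenue gap at the kink runs from −$20/bbl to $70/bbl, so marginal costs of $10/bbl and $40/bbl both leave output at the kink and price unchanged, while a marginal cost of $75/bbl, above the gap, moves the optimum away from it; with marginal cost inside the gap, a $15 vertical shift in demand raises price by exactly $15 with no change in quantity.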
According to Shepherd, a change in GDP per capita is best represented by a vertical shift in demand. Though a change in population is better represented by a horizontal shift, if the macroeconomy is less stable than the costs of extracting crude oil and world population, shifts in cost and demand taken together will cause greater instability in the price of crude oil with a kinked demand curve than with a smooth curve. Table 1 uses Eq. in the dynamic process guiding GDP and price, described at the bottom of the table, with a symmetric specification of demand. With an asymmetric specification, from Eq., dP = dG because dD = 0, as in Fig. 4. An initial change in GDP occurs between t = 0 and t = 1. As price fluctuates, decreases in GDP with the asymmetric specification exceed those with the symmetric specification, eventually stabilizing GDP at a lower level with the asymmetric than with the symmetric specification. This is true given either an initial increase or an initial decrease in GDP.
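A deliberately minimal sketch of this mechanism follows; it is not the dynamic system behind Table 1. It applies a larger elasticity of GDP to log-price increases than to decreases, ignores the feedback from GDP to price and all other dynamics in the table, and uses illustrative elasticities and a made-up price path rather than the estimates reported in Section 4. Even so, it shows the central point: a price cycle that ends where it began leaves GDP lower under the asymmetric response, and unchanged under a symmetric response of the same average size.

import numpy as np

# Illustrative asymmetric elasticities of GDP with respect to price (assumptions).
ELAST_UP, ELAST_DOWN = -0.24, -0.08
ELAST_SYM = (ELAST_UP + ELAST_DOWN) / 2.0

# A made-up price cycle, $/bbl, that starts and ends at the same level.
prices = np.array([60, 90, 60, 100, 60, 80, 60], dtype=float)
dlogp = np.diff(np.log(prices))

def cumulative_gdp_effect(changes, e_up, e_down):
    """Cumulative log-GDP effect of a sequence of log price changes."""
    return sum((e_up if d > 0 else e_down) * d for d in changes)

asym = cumulative_gdp_effect(dlogp, ELAST_UP, ELAST_DOWN)
sym = cumulative_gdp_effect(dlogp, ELAST_SYM, ELAST_SYM)
print(f"price starts and ends at ${prices[0]:.0f}/bbl")
print(f"asymmetric response: cumulative GDP change {100 * asym:+.2f}%")
print(f"symmetric response:  cumulative GDP change {100 * sym:+.2f}%")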
OPEC maintains price above marginal cost by constraining expansion of its capacity. Since capacity takes time to expand, members can punish one another for expansion of capacity by responding in kind before any such “cheating” on agreed-to levels of capacity results in revenues for those who would cheat. Uncooperative expansion of capacity is easy for the cartel to deter. Consequently, OPEC may find itself with little excess capacity to produce and, like other producers in the energy industries, it may have a “hockey stick” shaped marginal cost curve. In Fig. 5, the flat, lower portion of the hockey stick is the “paddle”, and the steep, upper portion is the “handle”. OPEC's marginal costs may be low in the short run because they do not include any capital costs, or high because they include quasi-rents on limited production capacity; OPEC's marginal costs may change little with production when capacity is slack and rapidly with production when capacity is tight. The range of equilibrium prices narrows as the marginal cost curve becomes steeper. As price falls, marginal cost will come to exceed marginal revenue over increases in production at a higher price if marginal cost also rises. As price rises, marginal revenue will come to exceed marginal cost over decreases in production at a lower price if marginal cost also falls. The destabilizing effect of the asymmetry on price is greater when OPEC has excess capacity. This is shown in Fig. 6. Equilibrium price and quantity are initially the same under all alternative assumptions: asymmetric demand curve D; symmetric demand curve DStraight; increasing marginal costs MCHS, the “handle” of which represents scarce capacity; and constant marginal costs MCFlat, which represent excess capacity. With MCHS, if demand shifts from D to D′, price changes to P′, but if demand shifts from DStraight to DStraight′, price changes to P″, so the effect of the asymmetry on the change in price is (P′ − P) − (P″ − P) = P′ − P″. With MCFlat, if demand shifts from D to D′, price also changes to P′, but if demand shifts from DStraight to DStraight′, price changes to P‴, so the effect of the asymmetry is P′ − P‴. Therefore, the increase in marginal cost changes the effect of the asymmetry by (P′ − P″) − (P′ − P‴) = P‴ − P″ < 0. With asymmetric demand, price rises to P′ whether marginal costs rise or not, but price rises more with symmetric demand when marginal costs are increasing because the increasing marginal costs deter increases in production. Since marginal cost increases more rapidly with production at times of tight capacity, the destabilizing effect of the asymmetry on price is greater when OPEC has excess capacity. In Fig. 6, tight capacity and rapidly increasing marginal costs attenuate the part of an increase in price resulting from asymmetry in demand. Correspondingly, excess OPEC capacity and stable marginal costs may accentuate the part of a fall in price resulting from asymmetry in demand. High prices in the early 1980s led to “demand destruction”, and, by 1986, OPEC's excess capacity was high, so its marginal costs were low and stable. While it is harder for OPEC to cooperate in the presence of excess capacity, the asymmetric effects of price on the macroeconomy and demand for crude oil may help explain the price collapse of 1986 as advantageous for OPEC, and not merely reflecting a breakdown in cooperation within the cartel. There is less temptation for OPEC members to cheat on production quotas when capacity is tight because potential profits from increasing production are limited by the capacity of potential cheaters.7 Before the oil price shock of Summer 2008, non-OPEC production had been falling despite rising prices, so OPEC capacity was tight, and the cartel restrained production.8 If one member of OPEC suddenly ceases production due to war or civil conflict, excess capacity may shrink considerably at any given price, incenting remaining members of the cartel to restrain, if not decrease, production. After Libyan production declined in February of 2011 during the fall of the Gadhafi government, Saudi Arabia refused to raise production. Total OPEC production remained unchanged in Quarter I from the previous quarter and fell 554 Mbbl/d from Quarter I to Quarter II. Price rose 11% in Quarter I and 13% in Quarter II.9 Imagine that Fig. 4 applies to all members of OPEC other than Libya, and that the upward shift in demand occurs when production is disrupted in Libya. These were shocks to price originating in production, and not only did they reduce GDP over time, but they caused energy bills to vary countercyclically, destabilizing consumption. Though not the predominant origin of volatility in price, shocks to production represent an especially strong reason to make volatility in price less profitable for OPEC. In the demand equation, Dt is quarterly demand for oil worldwide in billion barrels per year, and qti is a dummy variable indicating that time t is also Quarter i.
Pt is the real price of crude oil in 2005$/bbl, Gt is world gross domestic product, t is time in quarters, and εtD is potentially heteroskedastic and correlated with εtS from Eq.The term δP captures the effect of contemporaneous price on the quantity of oil demanded worldwide.I treat quantity demanded as a linear function of price so that the price elasticity of demand increases with price.Numerous substitutes for crude oil, including potential conservation measures, exist, but they have only become competitive as oil prices have reached new highs.These may include ethanol from cellulose as well as corn, biodiesel, coal to liquids, natural gas to liquids, potentially huge reserves of natural gas hydrates below the ocean floor, and denser urban design.Both substitution and income effects favor a linear specification.10,Statistically, price performs better than log price in specification tests, though qualitative results are similar.I include a deterministic trend to account for increasing efficiency in the use of oil.The ratio of world oil consumption to GDP in 2010 was less than half what it was in 1973.While some of this resulted from substitution of other fuels for oil, most did not.In 2010, worldwide consumption of primary energy of all kinds per unit of GDP was a fourth what it was in 1980.11,According to Atkins and Jazayeri, inclusion of the deterministic trend obviates the need to model the direct effects of price on demand as asymmetric.According to Griffin and Schulman, the trend is superior.According to Wing, it is important, at least in the U.S.I model persistence in demand for crude oil using quantity demanded of crude oil lagged one quarter Dt − 1.Short term persistence results from rigidity in planning for the use of oil-specific capital and durables, such as travel planning.Longer term persistence results from the sunk costs of oil-specific physical and human capital and durables, beginning at the refinery and continuing downstream to such things as gasoline-powered vehicles and auto-oriented urban design."Iraqi and Kuwaiti production fell during and after the Gulf War, the International Energy Agency tapped strategic stocks, expectations changed significantly and frequently, and short term volatility in price increased at the time of and following Iraq's August 1990 invasion of Kuwait.Allowing a temporary shift in intercept during this time improved the results of tests of specification and stationarity.I use log price so that the price elasticity of non-OPEC supply decreases in quantity supplied.While many sources of crude oil may be available, there is substantial variability in the cost of finding and extracting them.The decreasing short term elasticity reflects the effect of increasing costs of extraction as existing sources are used more intensively.Decreasing long term elasticity reflects increasingly costly sources being exploited.The U.S. interest and exchange rates reflect the importance of the U.S. dollar to commodity markets in general and that for crude oil in particular.Both organized exchanges and contracts for crude oil typically quote prices in dollars."The dollar was the world's “petro-currency” throughout the sample period and remains so today.The nominal rate of interest on dollar-denominated securities essentially measures the degree of inflationary pressure for the U.S. 
economy: The “real interest rate” component measures the extent to which the Federal Reserve restricts credit to prevent inflation, and the remaining component represents the extent to which it accommodates inflation.A change in the exchange value of the dollar can affect the incentive to produce under non-indexed lease agreements and forward contracts, but it will only shift demand for crude oil between the U.S. and other countries, without having much effect on world demand, since the U.S. economy is about as oil-intensive as the world economy as a whole.Empirically, if I include it − 1 and Xt − 1 in the demand equation, they remain statistically significant in the supply equation, but they are not statistically significant in the demand equation.Lagged cumulative supply reflects rising costs of exploration, development, and extraction as less expensive sources of crude oil are exhausted, while the time trend captures the effect of advancing technology, which lowers the costs of exploration, development, and extraction.Lagged quantity supplied reflects persistence in supply resulting from the presence of physical and human capital specific to exploration and extraction of crude oil.Wirl advocates for dynamic specification of both demand and non-OPEC supply.The price of crude oil affects world GDP which, in turn, affects demand for crude oil.OPEC must account for the effects of its pricing and production on the world economy when deciding what will be most profitable for the cartel.A good deal has been written about asymmetry in the response of the macroeconomy to changes in the price of oil.12,Increases in price seem to damage the economy more than decreases help.Reasons include nominal rigidities, allocative disturbances and uncertainties, income and liquidity effects, and large transfers of wealth.Regarding the former, much of the world uses petroleum products to travel between home and work, so oil and labor are complements in the production of a great many goods worldwide.Normally, prices of complements move in opposite directions, but, because wages are sticky in the downward direction, increases in the price of oil tend to cause unemployment, rather than lower wages, while the higher employment that decreases in the price of oil bring about is moderated by increases in wages.Mork relates this argument without specific reference to the importance of oil in transporting workers to their jobs.To capture the asymmetric effects of oil prices on the macroeconomy econometrically, I specify increases and decreases in the price of crude oil separately, using first differences in the log of price to explain first differences in the log of GDP.I allow for response in the first difference in log GDP to contemporaneous increases and decreases in log price, and one lag in the log of price.Longer lags of log price and its first difference affect growth in GDP through lagged dependent variables.Table 2 shows the basic data, various transformations of which are used to make the estimates.13, "The footnotes to Table 2 explain most of the columns, but Column A is the U.S. refiners' acquisition cost of imported crude oil, tabulated and defined by the U.S. 
Energy Information Administration as the “world price” of crude oil in its analyses and forecasts.I assume that the world market for crude oil is integrated.14,Beginning in early 2011, oil prices in the North American interior, as measured by the WTI New York Mercantile Exchange benchmark, fell relative to their historic relationship with oil prices elsewhere, but I do not use data from after 2010 in the regressions.Column H is the percent drop in world output of crude oil caused by war or civil conflict.The episodes of war and civil conflict include the November 1973 Yom Kippur war, the November 1978 onset of the Iranian Revolution, the October 1980 onset of the Iran-Iraq war, the August 1990 Iraqi invasion of Kuwait, and the U.S. invasion of Iraq in March of 2003.Real crude oil prices are derived using Columns A and B.I derive quarterly world GDP by applying quarterly variation in U.S. GDP, Column F, to annual world GDP, Column E. Both U.S. GDP and U.S. consumption of crude oil declined from a fourth to a fifth of the worldwide totals during, and the oil intensity of the U.S. economy was about the same as that of the world economy throughout, the sample period.I use world GDP rather than OECD GDP as a measure of income.The non-OECD share of world consumption of crude oil has increased from 25% to 50% since 1970.15,Economic growth caused demand for oil to grow especially fast in some non-OECD countries such as China and India.I use purchasing power parity because market exchange rates exhibit variation that does not seem to reflect the real incomes of consumers of crude oil and those affected by the market for it.National GDPs calculated using market exchange rates can deviate from purchasing power parity for extended periods.16,I assume that the quantity of crude oil demanded equals the quantity produced, as measured by world production of crude oil.The quantity demanded, then, includes that amount added to inventory.The terms et − 3S and et − 12S in Eq. are explained in the appendix.The R2 = 0.9756 for Eq. and 0.9927 for Eq.All of the coefficients are significantly different from zero at the 95% level except the dummy variable for Quarter 2 in Eq. and the deterministic trend, which is significant at the 90% level, and the constant in Eq.In Eq., price, GDP, the time trend, lagged demand, and the dummy variables for Quarters 3 and 4 are significant at the 99% level.In Eq., log price, the lagged rate of interest, lagged supply, and the dummy variables for Quarters 2 and 4 are significant at the 99% level.All of the coefficients have the expected signs, with the possible exceptions of those on the Gulf War dummy, Mt90III92IV, and the lagged rate of interest, it − 1.My explanation for the former is that the period in question encompasses several quarters beyond the war itself, a number of things were in flux at the time, and price is held constant by its representation in the regression, so any non-OPEC response to higher prices at the time should not be reflected in the coefficient on the Gulf War dummy.My explanation for the latter is that it − 1 is a nominal rate; as such, it may reflect inflationary expectations more than real rates of return on lending.Higher inflationary expectations incent competitive firms to hold oil, while higher real rates of return on lending incent them to extract and sell oil.Two-step GMM estimates of Eq. 
are shown in Table 3. Of the 11 regressors, including the constant, seven have coefficients that are significantly different from zero at the 95% level, and six at the 99% level. The most notable exception is decreases in log price, whose coefficient is nearly statistically significant; this is not due to lack of precision in estimation, as its standard error is hardly higher than that of the coefficient on increases in log price, but to its point estimate being smaller in absolute value. Oil prices are important to the macroeconomy, but are not all-important. This is shown in Fig. 7, which plots the residuals associated with the estimates of Eq. shown in Table 3. The exceptionally low residuals in 1980:II and 2008:IV were preceded by exceptionally high residuals in 1978:II and 2006:I, respectively; oscillations which may represent the crests and troughs of a “business cycle” apart from movement related to the price of oil. The deepening of the “Great Recession” in the fall of 2008 is visible, a result of a collapse in private lending, but the oil price shock of the previous summer was a contributing factor. Evaluating Eq. in 2008:II gives an estimate of the impact of the large increase in price going into that summer. Price in 2008:II was Pt = $106.31/bbl, and in 2008:I was Pt − 1 = $84.05/bbl. The estimated elasticity for an increase in price is − 0.2406, and the price change was 24.7%, so the estimated effect over time on world GDP of that largest-ever quarter-to-quarter increase in the real price of crude oil is 5.94% of one quarter's worth of GDP. Between 2008:III and 2008:IV, price fell from $102.40/bbl to $47.43/bbl, a 73% drop. The estimated elasticity for a decrease in price is − 0.0763, implying a gain in quarterly GDP of about 5.57%. In my constructed data, world GDP grows at about 3.33% annually, so these estimates suggest that, taken as a whole, the oil shock of 2008 set the world economy back about a month and a half. In this historic scenario, price fell more than it rose, but the overall impact on the world economy was negative because of the asymmetric effects of changes in the price of oil.
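As a quick check on the arithmetic in this 2008 example, the lines below simply multiply the elasticities by the percentage price changes reported in the text; the only added assumption is converting the 3.33% annual growth figure to a monthly rate so that the net loss can be expressed in months of growth.

# Back-of-the-envelope check of the 2008 example, using only the figures quoted above.
up_effect = 0.2406 * 0.247        # GDP loss from the 2008:I -> 2008:II price increase
down_effect = 0.0763 * 0.73       # GDP gain from the 2008:III -> 2008:IV price decrease
net_loss = up_effect - down_effect            # net loss, as a share of one quarter's GDP
monthly_growth = 0.0333 / 12                  # ~3.33% annual world GDP growth
print(f"loss from the increase: {100 * up_effect:.2f}% of quarterly GDP")
print(f"gain from the decrease: {100 * down_effect:.2f}% of quarterly GDP")
print(f"net setback: {100 * net_loss:.2f}% of quarterly GDP, ~{net_loss / monthly_growth:.1f} months of growth")

This reproduces the 5.94% and 5.57% figures and a net setback on the order of 1.3 months of growth, close to the rough month-and-a-half figure quoted above.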
The supply curve of a monopolist does not exist. OPEC is not a monopolist in the literal sense, but its market power in the world market for crude oil is not much rivaled, so, as a whole, its profit-maximization problem is similar to that of a monopolist. According to Kaufmann, non-OPEC producers are price-takers, and OPEC behavior fits a variety of non-competitive models. Jin et al. model the world oil market as a Stackelberg game with a dominant firm and competitive fringe producers. OPEC, then, whose market share is around 40%, does not interact strategically with non-OPEC suppliers, with the possible exceptions, ignored here for simplicity, of the governments of Russia and, less likely, Mexico and Norway, whose market shares are around 13%, 3%, and 2%, respectively. OPEC takes account of its influence on non-OPEC production when deciding its own production, but non-OPEC producers do not take account of their influence on OPEC in deciding their production.18 Since OPEC's market power is substantial and largely unrivaled, it decides the world price of crude oil as it decides its own production. Table 4 shows estimated marginal operating costs by region. Regions where OPEC countries are located are highlighted. Even at $45/bbl, price is well above marginal operating cost in OPEC countries, except, perhaps, Nigeria and Venezuela. Though competitive producers whose future marginal costs increase in current production will produce where marginal cost is less than price, such a large difference between price and marginal cost suggests that OPEC has been successful at restraining production, more by restraining expansion of capacity than by abiding by publicized quotas. Using the estimates in Eq. and Eq. gives the values in Table 5. OPEC may discount future impacts on its revenues of current changes in price. I can account for this by dividing δD, as it appears in Eq., by a factor reflecting an appropriate quarterly discount rate r, and I will need to make a similar change to account for discounting impacts on non-OPEC supply. Again, OPEC may discount future impacts on its revenues of current changes in price. I can account for this by dividing ηC + ηS, as appearing in Eq., by the same discount factor. Table 6 shows estimated marginal revenue gaps at different prices, ranges of equilibrium prices at different marginal costs, and elasticities of world demand, world GDP, non-OPEC supply, and net demand to OPEC at a price of $100/bbl in 2014:III, assuming that OPEC's real discount rate is zero.21 If marginal cost were increasing in production, the ranges of equilibrium prices would be narrower. One might ask whether the emergence of tight oil has raised the elasticity of non-OPEC supply since the end of the sample period in 2010. Between that year and 2014, the price of Brent grew at an annual rate of 7.9%, and non-OPEC production at 2.3%, for a very rough estimated elasticity of 0.29, similar to the estimate in Table 6. I calculate the discontinuity gap in marginal revenue according to Eq. Marginal revenue over decreases in price is negative for prices below $47.81/bbl, though net demand to OPEC is elastic over increases in price, as expected for a profit-maximizing “monopolist”, at any positive price. The fall in the price of Brent from $111/bbl to $72/bbl between June 30 and November 28 of 2014 did not result from increases in OPEC production. Rather, it resulted from rising non-OPEC production and a weak world economy. That OPEC did not take “corrective” action at its November 27, 2014 meeting by cutting production is broadly consistent with a discount rate of zero and a marginal cost of $30/bbl. With prices below $40/bbl, OPEC again declined to cut production at its December 4, 2015 meeting, consistent with a discount rate of zero and marginal cost between $10/bbl and $20/bbl, which better characterize the Persian Gulf countries than the African or South American members of the cartel. Table 7 is similar to Table 6,
but OPEC's real annual discount rate is assumed to be 10%, rather than zero. Elasticities are calculated using discounted changes in quantities, and OPEC behaves as though net demand is less elastic with respect to both price and income. The marginal revenue gaps are lower: at a higher rate of discount, lower marginal costs are required to support any given price. The ranges of equilibrium prices are higher and wider. In the wake of the price collapse of 2014–15, less liquid OPEC members Venezuela, Nigeria, and Iran pressed for higher prices than did the more liquid Arabian countries. That OPEC did not take "corrective" action at its November 27, 2014 meeting by cutting production is broadly consistent with an annual discount rate of 10% and a marginal cost of $20/bbl. That it did not take "corrective" action on December 4, 2015 is consistent with a discount rate of 10% and a marginal cost not far above $10/bbl; some of the less liquid OPEC countries have marginal costs significantly higher than this. The impact of a higher discount rate on OPEC's optimal prices is opposite to that in a Hotelling or other competitive model of resource extraction, in which a higher discount rate motivates more rapid extraction by individual competitive producers, whose collective impact, then, is to lower current prices. Here, dynamic world demand and non-OPEC supply, represented using the lagged dependent variables, mean that lower production and higher prices cause net "demand destruction" at later times. Because OPEC is not a price-taker, it takes account of this effect, and discounts it at a higher rate in Table 7 than in Table 6. Only with dynamic net demand to a seller with market power is the response of production to an increase in the discount rate reversed from the rise in production that would be observed for either a competitive supplier or a supplier facing static demand. At a price of $100/bbl in 2014:III, the very-short-run elasticity of net demand to OPEC is −0.2143 over an increase in price and −0.1223 over a decrease; in the very short term, then, net demand to OPEC is very inelastic. Assuming non-decreasing marginal costs, OPEC can therefore collect countercyclical profits by promulgating temporary price shocks originating in changes in production. The asymmetric effect of the price of oil on the macroeconomy implies a range of equilibrium prices for OPEC. The asymmetry, identified by others and supported in the econometric estimates made here, also implies that fluctuations in price damage the macroeconomy. I estimate the range of equilibria to be over $40/bbl wide, assuming constant marginal costs. When changes in price occur within this range for reasons beyond OPEC's control, OPEC does not have the incentive to reverse the change that it would have with symmetry. High prices that persist for a time, then, do not imply that the world is about to run out of oil, and a period of low prices does not imply that OPEC is moribund as a cartel. From the middle 1980s through the 1990s, OPEC used low prices to rebuild demand that had been destroyed by the recessions, rising non-OPEC production, conservation, and "new urbanism" that were caused by the high oil prices and oil price shocks of the 1970s and early 1980s, which were also perpetrated by OPEC. A growing world economy and declining non-OPEC production lifted prices in the 2000s, a slow world economy and expanding non-OPEC production later brought them down, and OPEC chose to accept both changes, for reasons at least partly explained by asymmetry in net demand. In the middle 2010s, OPEC has again been accepting low prices in
order to rebuild net demand, not only by discouraging non-OPEC production, but also by allowing the world economy to recover.As it does, lowering and stabilizing the price of oil will become a more significant challenge.Price-taking sellers and buyers of crude oil should expect unstable prices.If their expectations are adaptive, this analysis may imply no change in their behavior, as they have over forty years of unstable OPEC-era prices from which to extrapolate.Table 8 shows the impact on ranges of equilibrium prices of policies designed to decrease the absolute level, or increase the price elasticity, of net demand to OPEC.I represent any policy designed to lower world demand or raise non-OPEC supply at any given price by lowering the constant term in Eq. or raising that in Eq. by 1 bbl/yr.The former type of policy would include conservation in the use of oil products or development of alternative sources of energy, and the latter subsidization of production of crude oil.I represent a policy of raising elasticity of world demand by multiplying the coefficient on price in Eq. by two and raising the constant term by 1 bbl/yr.Similarly, I represent a policy of raising elasticity of non-OPEC supply by multiplying the coefficient on log price in Eq. by two and lowering the constant term by 1 bbl/yr.22,Either operation could represent a policy of using strategic stocks to stabilize prices.The effect of all of these policies is to narrow and lower the range of equilibrium prices, but increasing the responsiveness to changes in price of either demand or supply is more effective at stabilizing and lowering the price of crude oil than conservation or increases in supply.Why?,Starting from any price, if an increase in price lowers quantity demanded more, OPEC has less incentive to raise price, and if a decrease in price raises quantity demanded more, OPEC has less disincentive to lower price; thus, greater elasticity of net demand to OPEC lowers price.The purpose of the foregoing analysis and its policy implications is to raise economic growth and employment by explaining and stabilizing, respectively, unstable prices for crude oil.Private actors have incentive to buy oil low and sell it high, which helps to stabilize price, but not to the extent that they internalize the macroeconomic damage caused by instability in price.To raise the elasticity of net demand to OPEC, tax preferences might be directed toward expanding private storage facilities, and severance taxes might be applied to volume, rather than value.Use of government stocks, like the U.S. 
Strategic Petroleum Reserve, to increase sales when price is high and purchases when price is low could help stabilize price.Expanding those stocks, and simply being prepared to use them, could do the same.China added substantially to its strategic stocks under low prices in early 2016,23 but its stocks may still be too small to prevent a damaging price shock later: In Vatter, I estimate that world GDP is maximized when OECD stocks are six billion barrels, and a “reserve manager” deters price shocks by standing ready to buy low and sell high in amounts that internalize the macroeconomic effects of changes in price.Use of strategic stocks, compared to other policies that raise the price elasticity of demand to OPEC, has the advantage that it can target instability in price originating in changes in production, which destabilizes consumption overall.Accumulation of strategic stocks of petroleum would raise price; eventual liquidation would lower it.Grain converted to ethanol could be used similarly.The Brazilian economy shrank only 0.2% during 2009, compared to 4% for the European Union and 2.5% for the United States."Brazil requires gasoline to be blended 3 to 1 with ethanol, and a fourth of Brazil's light vehicles could run on either gasoline or ethanol entirely.Ferreira-Tiryaki estimates that the introduction of these flex-fuel vehicles increased the elasticity of demand for both gasoline and ethanol in Brazil.Generally, making ethanol content more responsive to the price of oil could help stabilize it.Due to the complementarity in the production of many goods between labor and petroleum products used for commuting, policies that reduce the oil-intensity of commuting should help mitigate the damage to GDP, even locally, from any fluctuation in price.They would also reduce demand for OPEC oil.The urban growth boundary in Portland, Oregon, which increases urban density and reduces commuting distance, is an example.Congested refinery capacity can make refinery demand for crude oil unresponsive to changes in price.In Fig. 8, price rises from P to P′, but refinery input does not fall.One may surmise, then, that there are macroeconomic benefits to maintaining excess refining capacity.Tables 6 and 7 show that a higher discount rate raises and widens the range of equilibrium prices for OPEC."If conservation, non-OPEC production, strategic stocks, ethanol, or excess refining capacity drive oil prices so low that relatively high-cost OPEC governments experience fiscal distress, as in Libya, Nigeria, and Venezuela in 2015–16, these policies may have the unintended consequence of raising OPEC's discount rate, and, therefore, causing a rebound in the price of oil.OPEC governments and producers whose authority or property rights are insecure will have higher discount rates.Thus, security of governmental authority and property rights over oil and oil-producing capital in OPEC countries, ceteris paribus, should lead to lower and more stable prices for crude oil, and a more stable and prosperous world economy."Because OPEC production responds in the opposite way from non-OPEC production to a change in the discount rate, the effectiveness of monetary policy at accelerating or slowing macroeconomic activity depends on OPEC's market share. "The larger OPEC's market share, the more oil production will fall, and the more the price of oil will rise when monetary authorities increase the rate of interest, and the more that increase in the rate of interest will slow the world economy. 
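The logic behind the equilibrium-price ranges in Tables 6 and 7, and the role of the discount rate just discussed, can be illustrated with a small numerical sketch. The code below is not the estimated model, and the numbers it prints are not the paper's estimates: it replaces the estimated log-linear dynamic equations with a hypothetical linear net-demand curve, the parameter values and the discount_scale device are assumptions introduced only for illustration, and the function name is mine. It simply checks, over a grid of prices, where marginal revenue over a price increase is at least marginal cost while marginal revenue over a price decrease is at most marginal cost; any such price is an equilibrium, because OPEC gains neither by withholding nor by adding production. Shrinking the effective demand responses to mimic a positive discount rate shifts the range upward, in line with the comparison of Tables 6 and 7 (this linear toy does not reproduce the widening reported there, which depends on the estimated functional forms).

```python
import numpy as np

def equilibrium_price_range(mc, a=80.0, b=0.43, m_up=2.0, m_down=0.6,
                            discount_scale=1.0):
    """Equilibrium price range for a dominant firm facing asymmetric net demand.

    Hypothetical linear net demand to OPEC, Q(P) = a - b*P (million bbl/day).
    A price *increase* meets the steeper response b*m_up (extra GDP damage);
    a price *decrease* meets the flatter response b*m_down.  discount_scale < 1
    shrinks both responses, mimicking a positive discount rate applied to the
    future "demand destruction" caused by a change in price today.
    """
    prices = np.linspace(1.0, 300.0, 6000)
    q = a - b * prices                                    # current net demand
    mr_up = prices - q / (b * m_up * discount_scale)      # MR over a price increase
    mr_down = prices - q / (b * m_down * discount_scale)  # MR over a price decrease
    # Equilibrium: no gain from raising price (mr_up >= mc) and no gain from
    # lowering it (mr_down <= mc).  The gap between the two MR schedules is
    # what creates a *range* of equilibrium prices rather than a single price.
    ok = (q > 0) & (mr_up >= mc) & (mr_down <= mc)
    return prices[ok].min(), prices[ok].max()

for label, scale in [("no discounting", 1.0), ("heavier discounting", 0.6)]:
    lo, hi = equilibrium_price_range(mc=20.0, discount_scale=scale)
    print(f"{label}: equilibrium prices roughly ${lo:.0f}-{hi:.0f}/bbl")
```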
"Similarly, the larger OPEC's market share, the more a fall in the rate of interest will stimulate the world economy.While world GDP is the primary welfare criterion used here, the optimal policy mix may also depend on other criteria."Policy options for raising and stabilizing GDP by lowering or stabilizing the price of crude oil include lowering demand for crude oil, as through conservation, subsidization of renewable energy, or denser urban design, raising non-OPEC supply of crude oil, making net demand to OPEC more price-elastic, as through accumulation and trading of strategic stocks, subsidizing private storage, applying severance taxes to volume, rather than value, requiring engines to be fully convertible between gasoline and ethanol, subsidizing chargeable hybrid vehicles, or expanding refining capacity, and lowering OPEC's rate of time-preference, as through looser monetary policy or stabilization of OPEC governments.If, for example, climate change is also a concern, lowering demand may be preferred.If consumption-smoothing is of interest, accumulation and trading of strategic stocks may be preferred, as their use could be targeted to mitigating changes in price caused by changes in production, rather than by exogenous changes in GDP.If there is concern that relatively friendly or humane OPEC governments are under threat, stabilization of such governments may be a preferred policy option.Further research in this area could involve attempts at quantification.Optimizing OECD stocks using asymmetric demand like that estimated here might improve the estimates I made using a symmetric specification in Vatter.Estimating the model with Russia included in OPEC could be instructive."One might also investigate OPEC's costs, perhaps updating the data Celta and Dahl used to make their 2000 estimate, and experimenting with functional forms that are convex in production.Breaking demand and supply data down by country and estimating a net demand relation similar to that estimated here using cross-sectional time-series data might also produce more robust estimates.
Asymmetric effects of oil prices on the macroeconomy imply multiple equilibrium prices for OPEC. I estimate world demand for crude oil, non-OPEC supply, and the effects of changes in price on world GDP using quarterly data covering 1973 to 2010. If OPEC's marginal cost is $20/bbl in 2014:III, and its discount rate is zero, estimated equilibrium prices are $44–88/bbl. Multiple equilibria incent OPEC to tolerate unstable prices, which, because of the asymmetry, lower world GDP. Both policies that increase responsiveness to price and policies that lower net demand to OPEC narrow and lower the range of equilibrium prices, but the former are more effective at doing so. OPEC responds to changes in the discount rate in the opposite way from competitive producers, so policies that secure oil-related property rights in OPEC countries and other policies that lower OPEC's discount rate narrow and lower the range of equilibrium prices. Monetary policy is more effective at accelerating or slowing macroeconomic activity the larger is OPEC's market share.
404
High power phased EMAT arrays for nondestructive testing of as-cast steel
Diagnostic assessment of internal product quality during the continuous casting of steel is currently limited to offline and largely destructive methods, such as acid etching followed by sulphur printing , chemical analysis of drilled core samples and optical emission spectroscopy methods .There is a requirement from industry to perform product quality tests non-destructively and continuously during the casting process to allow feedback to the casting operators."This could, in principle, mitigate the development of internal defects, which both reduce the steel's sale value and in some cases present safety concerns .Detection of internal defects during the casting process presents a number of difficulties for conventional non-destructive evaluation techniques; the high operating temperatures, surface roughness and continuous movement of the sample necessitate the consideration of a non-contacting approach.The thickness of a cast steel slab lies in a range from 12 to 30 cm, which is sufficient to preclude the consideration of practical radiographic measurements, and to perform active thermography through such a sample thickness would be impractical, due to the variable and uncontrolled ambient temperatures of the casting environment and the likelihood of false indications arising from surface oxide scale.Ultrasound measurements have been identified as a realistic prospect of probing the surface and bulk of a cast slab and are the subject of previous studies on cast steel diagnostics , but there still exist a number of challenges when attempting to use acoustics.Namely, the slab itself is relatively thick and contains inhomogeneous and relatively large grain structures when compared to the expected dimensions of a casting defect.Hence attenuation of ultrasound signals, in particular the higher-frequency signals that have scattered from defects, will reduce detected signal amplitudes significantly.Additionally, previous studies have demonstrated that ultrasonic attenuation in metallic samples increases at high temperatures .Non-contacting methods of ultrasound generation are well-established , but the problem of non-contact measurements during continuous casting requires special considerations.The high sample temperatures of up to 1100 °C potentially make water jet coupling of piezoelectric transducers impractical , and the large impedance mismatch between the air and the steel sample precludes the use of air-coupled transducers .Ablative laser generation of ultrasound in steel billets during the casting process has already been demonstrated, and generates sufficient ultrasound wave amplitudes for both surface defect characterisation and possibly bulk wave measurements ."However, laser sources are relatively expensive, high-power laser beams present implications for the steel mill's safety regulations, the surface ablation pits can interfere with other visual inspection systems in place at the steel mill and large surface coverage interferometric detection of ultrasound waves using lasers is difficult in optically-rough and moving samples .Electromagnetic acoustic transducers have been used as ultrasonic detectors in conjunction with ablative laser generation sources for surface measurements of continuously cast steel billets , and so represent one possibility for performing bulk diagnostic tests."The low cost and minimal requirement for adaptations to the steel mill's safety protocols makes an entirely EMAT-based system attractive, but their poor transduction efficiency presents challenges in 
obtaining a practicable signal-to-noise ratio .The work presented here concerns the development of an EMAT phased array concept to overcome this inherent drawback of EMATs.The EMAT generator devices presented in this work consist of an inductor coil driven with a high amplitude dynamic current.Such devices have been demonstrated in previous studies to be relatively efficient bulk wave generation sources , and should in principle be more industrially-robust than conventional EMAT designs, since there is no requirement for an electromagnet or for active cooling of a permanent magnetic material to maintain a sensor temperature lower than the Curie point.Inspection of Fig. 1 indicates that the polarity of both the induced eddy currents and the dynamic field lines will reverse when the current in the driving coil is reversed, leading to exclusively repulsive mechanical forces normal to the sample surface at twice the frequency of the applied driving current .EMAT generation relies on the scattering of conduction band electrons from metal atoms to impart momentum into the metallic lattice; this is an inefficient process, due to the small electron-atom mass ratio.This contrasts with EMAT detection, which is a more efficient process, since sample motion is inherent to the incidence of an acoustic wave.The motion of the conducting sample in an applied magnetic field induces dynamic currents in the sample, which themselves induce a measurable potential difference in the detection coil.In detection, a static bias magnetic field is always required, usually supplied by a permanent magnet .This usually means that a coil-only EMAT cannot act as a detector.The inherent inefficiency of electromagnetic ultrasound generation means that EMAT measurements typically suffer from poor signal-to-noise ratios.This issue is compounded by the expected low signal amplitudes arising from the cast steel sample grain coarseness and high temperatures discussed in section 1, and hence design considerations are required to improve the signal amplitude of an EMAT-based system.One approach that can be taken to improve the signal-to-noise ratio of a measurement is to utilise a phased array to increase the signal amplitude through linear superposition; if the ultrasound signals are summed coherently, the resulting total signal amplitude increases, whilst any stochastic noise in the measurement sums incoherently.Enhancement of EMAT sensitivity by the geometric focusing of shear waves has been reported previously, however the approach taken relied on toneburst current excitations, which are more limited in power than the pulsed currents described in this work, and the dependence on geometric focusing prevented dynamic beamforming .The novelty of the work described here is the development of a high power EMAT phased array, designed specifically for the inspection of thick, attenuative industrial samples.The work presented here describes the development of a phased EMAT array generation system to enhance signals transmitted through the full thickness of as-cast steel slab samples.The driving circuit is similar to the driving electronics described by previous studies describing the development of a coil-only send-receive EMAT , but is capable of achieving much higher current amplitudes and hence more intense ultrasound generation, since the magnitude of the self-field Lorentz force scales with the square of the excitation current .Even higher current amplitudes have been reported for a single coil using a spark-gap discharge driving 
circuit, although this is not as practical as a solid state switching method .The solid state switching for each channel allows for accurate and reliable application of phase delays, which is essential for control of the phased array beam characteristics.The commercial software package PZFlex was used for all following finite element calculations.PZFlex implements an explicit time domain integration algorithm for solving dynamic elastic and acoustic fields.Further details relating specifically to the finite element solver can be found in Ref. .A high power EMAT pulser system consisting of four independent channels with programmable time delays was developed for EMAT array measurements on cast steel samples.Prior to the development of an experimental EMAT phased array transducer, finite element models were used to determine optimal array parameters."Compared to typical commercially available piezoelectric phased array systems, which are capable of driving up to 256 independent channels , the number of output channels available on the phased EMAT pulser's driving electronics is low.Typically when designing a phased array of any kind, it is beneficial to adhere to the diffraction limit and maintain an element separation equal to, or less than, a half-wavelength.With such a limited number of elements, however, the aperture would be small when adhering to the diffraction limit and hence the expected beam characteristics would be poor.Moreover, the inherent inefficiency of EMATs necessitates relatively large transducer footprints for practicable signal-to-noise ratios, making adherence to the diffraction limit difficult.A finite element study was therefore conducted to ascertain the best array parameters to achieve both a narrow beamwidth and sufficient sidelobe suppression.Analysis of the self-field Lorentz generation mechanism indicates that the coil-only design can be approximated as a rectangular piston source .Each EMAT element was therefore modeled by applying a uniform, time-varying, pressure profile across the relevant surface nodes in the model.Analytical modeling of the self-field mechanism indicates that the Lorentz force is proportional to the square of the driving current, leading to a doubling of the frequency content in the case of harmonic time dependence.The square of a half-cycle of a sine wave with a period of 2.0 μs was therefore chosen as the driving function for the pressure load in the model, to approximate the temporal pressure variation supplied by a coil-only EMAT driven by the current profile shown in Fig. 
3.A pressure amplitude of 40 MPa was selected as an order-of-magnitude approximation as determined from semi-analytical modeling.Internal defects of interest in cast steel, such as segregation defects and associated cracking, are likely to lie along the centreline, which in a 22.5 cm thick slab is at a depth of approximately 11 cm below the sample surface .Phase delays were therefore applied in accordance with equation to model the focusing of an incident longitudinal ultrasound pulse at a depth of 11 cm.The EMAT pulsing system available for experimental use has four channels, and so for meaningful comparison of the model with experimental results, a four-element EMAT generator was modeled in this way.The EMAT generation array can be characterised in terms of two defining parameters; the element separation and the element width.The aim of the study was to obtain the optimal values for these parameters to achieve a narrow beam width and high directivity in the generated beam.A series of simulations were run in which the element width was kept constant at 1 mm and the element separation was varied between 1 and 20 mm.The model output was a two-dimensional grid of pressure history data for each node in the simulation.The beam profile was obtained by defining a semi-circle with radius 11 cm about the centre of the array and plotting the maximum absolute values from the pressure histories of the corresponding nodes as a function of angle from normal incidence.The beam profile was then parameterised in terms of the 3 dB beam width and in terms of the logarithmic ratio of integrated beam amplitude within the 3 dB beam width to integrated amplitude without of the 3 dB beam width.The first parameter serves as a metric for comparing the directivity of the main beam lobe; a narrower beam width gives a more localised high pressure region, which is beneficial when aiming to separate defect indications that lie laterally close to each other.The second parameter serves as an indication of the relative amplitude of side lobes; if the ratio is low, then more of the beam energy is directed outside of the main beam width and in separate lobes that are directed away from the intended target region, leading to regions of high localised amplitudes other than the intended focus, and therefore potentially confusing attempts at defect localisation using a focused beam.The results from this series of simulations are displayed in Fig. 6.The overall observed trend is that an increased element separation reduces the 3 dB beam width, which is desirable, though at the expense of increasing power distributed through side lobes.This is to be expected; the larger array aperture leads to a more well-defined focus, however since the number of elements in the array remains constant, each increase in aperture size moves the element separation further away from the diffraction limit and hence increasingly large side lobes are observed.This can be to an extent mitigated by choosing a suitable element width.Fig. 
7 displays the beam characteristics modeled by choosing an array separation of 6 mm and varying the element width between 0.5 and 5.5 mm.It is observed that wider elements reduce side lobe generation for a marginal decrease in beam width.Intuitively, this can be explained through the consideration of each element as a normally-acting piston source.As the element width increases, proportionally more of the energy is directed downwards compared to a smaller element, for which contributions from the piston edge are proportionally greater and lead to non-normally-incident wave generation.A full optimisation would require modeling array parameters throughout the two-dimensional parameter space, however it is clear from modeling with the fixed width and separation values that the choice of a large aperture with large elements leads to smaller beam widths with suppressed side lobes.The data in Fig. 6 show that large increases in element separation beyond 10 mm produce diminishing returns in terms of beam width, with the minimum achievable beamwidth being approximately 10°.It was therefore decided that an EMAT array with element separation of 6 mm and with element widths of 4 mm provided a suitable compromise between narrow beam widths and suppressed side lobes.This choice of design gives a 3 dB beam width of 20°, which corresponds to a lateral size of 8 cm in the centreline region.Using the optimised array parameters obtained in section 3.2.1, it is possible to determine the expected improvement in signal amplitude that results from phased array generation.A finite element model was constructed in which a four-element EMAT array on the upper surface of a 22.5 cm thick steel block focused an incident longitudinal ultrasound pulse on the opposing surface directly underneath it.The driving function and mesh density were as described in section 3.2.1.An EMAT detector is sensitive to surface particle velocity, and hence in order to model the signal as detected using an EMAT, the velocity vector history was recorded for each node in the simulation grid.Nodes on the lower surface were chosen which corresponded to an EMAT detector, with a footprint described by Fig. 10, placed directly opposite the generation array.Out-of-plane velocity components in the bias field shown in Fig. 
8 generate opposing eddy currents under each half of the detection coil, and hence out-of-plane particle velocities at surface nodes corresponding to the detection coil can be directly summed to obtain a proxy of the voltage signal as measured by the inductor coil.The out-of-plane bias field components do not have opposite polarities under each half of the coil, however, and so for a velocity vector that lies in the plane of the surface, unidirectional eddy currents are generated, leading to the induced currents in the detection coil canceling each other.In-plane particle velocities at the coil surface nodes were therefore summed over each half of the coil and then subtracted to account for this cancellation effect.The resulting values give a measure of the calculated relative amplitude of the expected EMAT signal, however the numbers are not directly comparable to experimental measurements without a full model of the EMAT detection device.This is an unnecessary complication due to the non-trivial field geometries arising from the permanent bias field and its interaction with the steel sample, the dependence of eddy current densities on sample properties and the degree of mutual inductance between the detection coil and the sample.Instead, it is sufficient to compare the difference in amplitude between similar models of a single EMAT element and a phased array to determine the expected signal enhancement from using the phased array approach.The resulting velocity histories, summed over the appropriate nodes, are shown for the cases of a single EMAT generation element and generation by a phased array in Fig. 9.The difference in the peak-to-peak amplitude of the incident longitudinal pulse is a factor of 3.7.This result is to be expected, since it is approximately equal to the number of extra elements applied."A coil-only EMAT generator predominantly excites mechanical forces that lie out of the sample's surface plane, and hence lead primarily to longitudinal wave generation.Efficient detection of these transmitted longitudinal signals therefore requires an EMAT design that is sensitive to out-of-plane particle motion, and hence requires a static bias field with significant in-plane components."This is relatively difficult to achieve, since the permanent magnet supplying the bias field must lie above the sample surface and because the in-plane magnetic flux density falls rapidly with distance from the magnet's edge.Most longitudinal EMAT designs therefore involve winding an inductor coil around the edge of a permanent magnet, where there are significant parallel and perpendicular components to the field.Such a design leads to a large parasitic inductance in the coil, however, and the small area over which there are parallel field components leads to relatively weak received signals."For the application of bulk wave measurements in thick steel casts, it is important to optimise the detection EMATs, since the sample's thickness and high attenuation leads to small detectable signals.Newer EMAT designs have considered the positioning of a flat spiral detection coil between magnets of alternating polarity ."The chief advantage of these designs is that they reduce the parasitic inductance in the coil and expose more of the coil's length to the sample, and so lead to more efficient detection of longitudinal ultrasound waves.Since the coils are still placed at the magnet edges, there is still in-plane particle motion sensitivity, and hence these designs are also suitable for detection of shear wave 
modes.Experimental validation of the amplitude enhancement observed in the finite element modeling was achieved using a four-element high power EMAT pulser and a series of EMAT generation coils wound into a 3D printed plastic template to ensure tight control over element width and spacing."The EMAT array's generation coils were wound into a 3D printed template using 0.14 mm diameter copper wire enclosed in kapton tape.The parameters of the array were chosen on the basis of the finite element study presented in section 3.2.1, and so the width of each individual racetrack coil element was 4 mm, with the distance between the centres of adjacent elements being 6 mm.Phase delays were applied in accordance with equation to the four-element generation array to focus an incident pulse of longitudinal waves on the opposing face of a 22.5 cm thick as-cast steel slab sample.A single edge-field detection EMAT was placed directly opposite; this was connected to an amplifier, which was then connected to an oscilloscope to measure the time-dependent voltage across the detection coil.An A-scan recording of the voltage history resulting from phased array generation was compared to the signal recorded when just a single generation element, placed directly opposite the detection coil, was fired.These measurements were taken with no coherent averaging, but were digitally filtered using a Butterworth bandpass filter with low and high pass bands of 0.1 and 5.0 MHz respectively, and an order parameter of 1.The received signals demonstrate a clear improvement in transmitted signal amplitude by a factor of approximately 3.5 when using a four-element phased array generator instead of a single EMAT.This figure is in good agreement with the expected enhancement by a factor of 3.7 determined from finite element analysis, as discussed in section 3.2.2.The amplitude enhancement demonstrated by the use of a four-channel generation array can be further improved through the coherent addition of the transmitted signal as detected using an array of detection EMATs.Using the design outlined in section 3.3, an array of three detection EMATs was constructed with spacings of 3.0 cm between adjacent elements.The transmitted signal from the coil-only array generating at the opposite end of the sample and focusing at a depth of 11 cm was recorded independently on each detection channel.The transmitted longitudinal pulse signal was identified in the A-scan trace recorded by the central element in the detection array and cross-correlated with the data from each channel to determine the phase separation of the signal as recorded by each element.These phase delays were then applied to the A-scan data from each channel, before summing to produce a single A-scan data set with enhanced amplitude in the longitudinal signal.With sufficient signal-to-noise ratio on detected ultrasound pulses propagated through the full thickness of a cast slab sample, it is possible to begin looking at detection experiments for internal defects.A four-element phased generation array was placed on the upper surface of a 32 cm thick steel sample with a 6 mm diameter side-drilled hole centred at a depth of 16 cm.Phase delays were applied in accordance with equation to focus the incident longitudinal beam on the defect.A detection EMAT was placed adjacent to the generation array to record any backscattered ultrasound signals.Close proximity of the detector coil to the high current generation devices can saturate the amplifier and make detection of reflected 
ultrasound signals difficult.To an extent, this can be overcome through careful consideration of the relative positioning of the detection and generation coils.The generation elements used here are elongated, rounded rectangles or ‘racetrack’ shapes, and so the largest magnetic flux density during excitation occurs perpendicular to the long axis of the coil.Detection coils in close proximity exposed to this long axis become saturated during the excitation pulse."This effect can be mitigated by providing a suitable separation between parallel generation and detection coils, but this comes with the complication that the detection coils are aligned to efficiently detect Rayleigh waves, which mask signals arriving from the sample's interior.Instead, the coils can be aligned perpendicular to the generation array elements as shown in Fig. 15, which both exposes much less of the coil to the largest flux densities and so allows for smaller coil separations, and is a configuration that is less favourable for efficient detection of Rayleigh waves.Using the coil arrangement described in Fig. 15, a pulse-echo ultrasound A-scan was recorded on the 32 cm thick steel sample.Although the chosen coil orientation prevents amplifier saturation, the close proximity of the detection coil to the generation coils gives a dead time of 20 μs.The data were therefore processed, firstly by windowing away the generation noise before 20 μs, before fitting the A-scan trace with a 7th order polynomial and subtracting the fit function to de-trend the low frequency generation noise from the signal.High frequency noise was then removed using a Butterworth bandpass filter between 0.1 and 3.5 MHz.The small reflected signal at 53 μs corresponds to a back-scattered longitudinal wave from the defect."The larger pulses observed at 109 μs and 132 μs correspond to a longitudinal reflection off the sample's back wall and a forward-scattered mode-converted shear wave from the defect respectively.This interpretation of the A-scan trace in Fig. 16 has been corroborated with finite element analysis.The results of this experiment suggest that for detection of small defects, the largest indications are provided by forward-scattered mode-converted signals.Although the sample used in this experiment is not as-cast, the signals in Fig. 
16 that constitute the defect indication have traveled through 64 cm of steel, and so the prospect of detecting internal defects in a 22.5 cm thick as-cast slab sample remains promising.This work has discussed the development of a compact, low cost, high current four-channel phased array EMAT pulsing system that can drive coil-only generation coils at currents in the range of 1.75 kA.The channels of this pulser system have programmable phase delays with a temporal resolution of 2.5 ns, which allows for focusing and steering of the generated ultrasound beam.Experiments performed on large, coarse grained, as-cast steel slab samples with rough surfaces demonstrate an enhancement of the transmitted signal by a factor of 3.5, and appropriate application of phase delays on three receiving elements can further improve the signal to noise ratio of a transmitted longitudinal signal by an additional factor of 1.9.The EMAT phased array system presented in this work can deliver significant improvements in signal-to-noise ratio over the use of a single EMAT transducer.The ability to achieve high signal-to-noise ratio measurements in attenuative industrial cast steel samples using non-contacting sensors suitable for high-temperature application is a promising first step in the development of a measurement system that can be employed online during the continuous casting of steel for bulk and surface inspection of the slab.The experimental data presented here are supported by finite element calculations, indicating that such numerical simulation is appropriate for further development of the system.Preliminary defect detection experiments have demonstrated that the high-power phased array system can be used to detect artificial void defects that are of similar size to the wavelength, although the highest-amplitude signals observed actually correspond to forward-scattered mode-converted shear waves instead of back-scattered longitudinal waves as is typical in a conventional pulse-echo arrangement.For measurement on as-cast samples, where signal amplitudes are expected to be lower due to poor surface condition and coarse grain structures, a transmission setup for detection of these mode-converted signals should be investigated.Although defect detection using the phased EMAT array has been demonstrated, further studies are required to demonstrate defect detection in as-cast samples, and in particular to demonstrate detection of real casting defects in industrial samples.
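The receive-side processing chain described above can be condensed into a short sketch. This is illustrative rather than a reproduction of the experimental code: the focusing delay law is the standard expression for a linear array rather than the specific equation referred to in the text, the sampling rate, wave speed, array geometry, and band-pass filter order are assumed values, and the function names are mine. The A-scan conditioning follows the steps described for the pulse-echo measurement (windowing out the first 20 μs of generation dead time, subtracting a 7th-order polynomial trend, and band-pass filtering between 0.1 and 3.5 MHz), and the final routine aligns the detection channels by cross-correlation before summing them coherently, as was done for the three-element detection array.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

FS = 50e6        # sampling rate in Hz (assumed)
C_L = 5900.0     # nominal longitudinal wave speed in steel, m/s

def focusing_delays(element_x, focal_depth, c=C_L):
    """Firing delays (s) that focus a linear array at focal_depth below its centre.

    Standard delay law: elements with the longest path to the focus fire first,
    so that all wavefronts arrive at the focal point simultaneously."""
    path = np.sqrt(np.asarray(element_x) ** 2 + focal_depth ** 2)
    return (path.max() - path) / c

def condition_ascan(trace, fs=FS, dead_time=20e-6, poly_order=7, band=(0.1e6, 3.5e6)):
    """Window out generation noise, detrend, and band-pass filter one A-scan."""
    y = np.asarray(trace, dtype=float).copy()
    y[: int(dead_time * fs)] = 0.0                     # remove the generation dead time
    x = np.linspace(-1.0, 1.0, y.size)                 # normalised axis for a stable fit
    y -= np.polyval(np.polyfit(x, y, poly_order), x)   # subtract low-frequency trend
    b, a = butter(2, band, btype="band", fs=fs)        # 2nd-order band-pass (order assumed)
    return filtfilt(b, a, y)

def delay_and_sum(traces, reference=1):
    """Align channels to a reference channel by cross-correlation, then sum coherently."""
    ref = np.asarray(traces[reference], dtype=float)
    out = np.zeros_like(ref)
    for tr in traces:
        tr = np.asarray(tr, dtype=float)
        lag = np.argmax(correlate(tr, ref, mode="full")) - (ref.size - 1)
        out += np.roll(tr, -lag)   # undo the lag; circular shift is fine for a sketch
    return out

# Example: four generation elements on a 6 mm pitch focused at 11 cm depth.
elements = (np.arange(4) - 1.5) * 6e-3
print(np.round(focusing_delays(elements, 0.11) * 1e9, 1), "ns firing delays")
```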
A new high-power electromagnetic acoustic transducer (EMAT) solid state pulser system has been developed that is capable of driving up to 4 EMAT coils with programmable phase delays, allowing for focusing and steering of the acoustic field. Each channel is capable of supplying an excitation current of up to 1.75 kA for a pulse with a rise time of 1 μs. Finite element and experimental data are presented which demonstrate a signal enhancement by a factor of 3.5 (compared to a single EMAT coil) when using the system to transmit a longitudinal ultrasound pulse through a 22.5 cm thick as-cast steel slab sample. Further signal enhancement is demonstrated through the use of an array of detection EMATs, and a demonstration of artificial internal defect detection is presented on a thick steel sample. The design of this system is such that it has the potential to be employed at elevated temperatures for diagnostic measurements of steel during the continuous casting process.
405
Use of standardised patients to assess gender differences in quality of tuberculosis care in urban India: a two-city, cross-sectional study
Multiple systematic reviews,1–3 including a review of 56 prevalence surveys from 24 countries, have found that men are more than twice as likely to have active tuberculosis but are considerably less likely than women to be diagnosed and notified to national tuberculosis programmes.In India, this represents a reversal of the usual pattern of disadvantage for women in use of health care.4–7, "Although India's national tuberculosis programme receives 1·9 notifications regarding men for every notification regarding women,8 tuberculosis population prevalence is even higher among men, suggesting that men access tuberculosis care at substantially lower rates than women.1,9",Understanding the sources of this gender imbalance is crucial to identifying and treating the missing millions of patients with tuberculosis globally.10,After an individual develops active tuberculosis, they must traverse a process of care-seeking, diagnosis, linkage to treatment, treatment initiation, and notification to national tuberculosis programmes.11,12,Men might face a disadvantage at any or all of these stages, and identifying the stage of the care cascade that contributes most to the relative undernotification of men could help to focus interventions on the most important gaps in care.Of these stages, our study focuses on understanding gender imbalance in the diagnostic process.This stage is challenging to evaluate and represents a crucial point in care where changes in provider behaviour could improve case detection.13,14,Additionally, gender differences in the quality of care delivered by health-care providers, whether biased against women or men, have important implications for health equity and social justice.Incomplete data on clinical processes in these settings combined with the complexity of the diagnostic process and differences in presentations between men and women makes it difficult to answer a conceptually simple question: when men and women with the same tuberculosis symptoms visit the same health-care providers, do they receive the same quality of care during initial clinical evaluations?,In the absence of medical records,15 we use standardised patients to answer this question.Standardised patients are people recruited from local populations and extensively trained to portray a predetermined and scripted medical condition to health-care providers.In India, standardised patients were first used to understand primary care provider practice in rural and urban India,16,17 and their use has since expanded to a variety of settings and health conditions.Standardised patients are increasingly considered the gold standard for assessing the practice of health-care providers in low-income and middle-income countries.18,19,For tuberculosis, our research team has validated the use of such patients in urban India, and subsequent research has extended the method to China, Kenya, and South Africa to assess quality of tuberculosis care.20–24,Evidence before this study,In India, as in many other countries with a high tuberculosis burden, men substantially outnumber women among notified people in the national tuberculosis programme.However, men are usually underrepresented in these programmes relative to their share of the tuberculosis disease burden, meaning that men are disadvantaged at some point in the care seeking process relative to women.Whether health providers themselves contribute to these differences through delays in diagnosis of men or women is unknown.Reliance on administrative programme data and provider 
interviews is insufficient for discerning potential gender differentials, because of biases created from potentially different care-seeking patterns across gender.Therefore, how gender differences in notification rates reflect access to or provision of health services and differential quality of tuberculosis care for men and women is uncertain.The standardised, simulated patient method, which is considered the gold standard method to assess provider practice, allows us to rigorously address the quality of tuberculosis care by gender.Added value of this study,The standardised patient method, by ensuring that the case presentations and provider selections of men and women are identical, is used here to determine whether a significantly different response occurs on the provider side when treating otherwise identical case presentations from men and women.Our study used a large, representative sample of private health-care providers in two urban Indian settings with standardised patients to compare the quality of tuberculosis care received by men and women.Because standardised patients were not assigned to providers with a random assignment function from a computer program, but instead by a field team blinded to this study, we did tests that confirmed that standardised patients were assigned by gender as good as randomly.We found that men and women do not receive different health-care experiences in terms of quality of care.Providers were as likely to correctly manage men as they were women, extending to several management decisions among all provider types.Implications of all the available evidence,Systematic differences in quality of care are unlikely to be a cause of the observed under-representation of men in tuberculosis notifications in the private sector in urban India.To understand gender differences in quality of tuberculosis care, we use the same data from our publication on quality of tuberculosis care in the private sector of two Indian cities.23,In that study, we documented wide variation in quality of care across providers that was poorly explained by location and education level, although it was highly responsive to case presentation.23,In this Article, we use the fact that our standardised patients included both men and women to examine systematic differences in care that could be attributed to gender.Individual de-identified interaction data, including data dictionaries, will be available.All variables needed to recreate the results reported in this article will be included, as will the code required to reproduce these results.Data will be available indefinitely upon publication to anyone who wishes to access the data for any purpose.The data and code can be accessed at https://github.com/qutubproject/lancetgh2019.We analysed clinical interactions done by standardised patients portraying four tuberculosis case scenarios at health facilities in the two Indian cities of Mumbai and urban Patna.25,Health facilities were representatively selected in each city using random sampling described previously,23 stratified by qualification and pilot tuberculosis programme engagement status.The health facilities included providers across the range of qualifications available in urban India.On one end of this range were chest specialists and providers with Bachelor of Medicine, Bachelor of Surgery degrees.At the other were providers without an MBBS, including practitioners trained in ayurveda, yoga and naturopathy, unani, siddha, and homoeopathy, as well as registered medical practitioners 
and those without any formal medical training at all.In each city, stratified sampling was used to randomly oversample providers enrolled in pilot tuberculosis programmes in the private health sector.Data collection for the study was done between Nov 21, 2014, and Aug 21, 2015, in both cities, as part of a larger study on tuberculosis care among private health-care providers in urban India.23,Details of the standardised patient method used in this study are discussed in our previous publications,20–23 including our validation study of the standardised patient method for tuberculosis in India.20,The validation study20 showed that standardised patients were able to recall clinical encounters accurately, that detection of standardised patients as fake patients was low, and that the method posed no major risks to either providers or the standardised patients themselves.Standardised patients portrayed four different standardised cases developed with the support of a technical advisory group comprised of clinicians, public health experts, economists, anthropologists, and experts in both the Standards for TB Care in India and the International Standards of TB Care.26,27,Each standardised patient primarily portrayed only one of the four cases.Several standardised patients worked in both cities, and, in Patna, all standardised patients did some Case 1 interactions, even if this was not their primary presentation.All medicines prescribed or offered to the standardised patients were independently coded and classified.Further details regarding standardised patient recruitment, training, sampling, and data collection are provided in the appendix.This study was granted clearance by the ethics committees at McGill University Health Centre in Montreal, Canada, and the Institute for Socio-Economic Research on Development and Democracy in New Delhi, India.All the standardised patients were hired as field staff and participated in training and refresher training to mitigate any potentially harmful events, such as injections, invasive tests, and consuming any medicines during encounters.As described in our previous publication that uses the same data,23 we sought a waiver of provider informed consent based on the research ethics provisions from the Government of Canada Panel on Research Ethics and a study commissioned by the United States Department of Health and Human Services to assess the ethics of simulated patient studies.28,Supported by our pilot study, which validated the use and ethical implementation of the standardised patient method for tuberculosis,20 both ethics committees approved the waiver, particularly for the following reasons: combining informed consent with the congregation of providers during association meetings and the implementation of tuberculosis interventions during the study period posed threats to the scientific validity of the study objectives as well as to the risk of standardised patient detection, and no more than minimal risk is associated with participation for the standardised patients or the providers, as reported in our validation study.20,All questionnaires and case scripts are available from the authors upon request.Table 2 shows the outcome measures that we used and how they were measured.Our primary outcome was correct case management, which is a predetermined, case-specific outcome benchmarked against the STCI and approved by the technical advisory group convened before standardised patient data collection.26,Secondary outcome measures included the following 
dimensions of process quality indicators: history questions asked, time spent with the patient, diagnosis provided, medicines given, whether the standardised patient was counselled on treatment, their assessment of whether the environment was private or distracting, whether they liked the doctor, whether they would go to the provider again, and whether the provider seemed knowledgeable and addressed their concerns seriously.We assessed the number and types of medications prescribed to the standardised patients for each of these cases.In addition to assessing the number of different medications prescribed or dispensed in an interaction, we also report the use of broad-spectrum antibiotics, anti-tuberculosis prescriptions, fluoroquinolone antibiotics, and steroids, which can have negative patient-specific or public health consequences.Treatment with fluoroquinolone antibiotics or steroids can mask primary tuberculosis symptoms, leading to a delay in accurate diagnosis.29,30,A primary concern for the validity of the standardised patient method is that the individuals presenting the cases were not actually ill and, therefore, do not automatically present the correct clinical appearance, despite their training.To ensure that standardised patient presentations were convincing, such that detailed questioning and examination of the standardised patient did not lead providers to conclude that the standardised patients were healthy, we examined associations between case management and checklist completion for each standardised patient.We did two sets of comparisons to establish that providers visited by women were, on average, identical to providers visited by men presenting the same standardised patient case scenario.In our study, women and men were recruited as standardised patients in all case scenarios in each city.However, fieldwork conditions precluded the explicit random assignment of standardised patients to providers, and the assignment of standardised patients to providers was done in the field by supervisors who were blinded to the fact that we would do a gender analysis.This made it highly unlikely that standardised patients who were men were assigned to providers with perceived higher or lower quality.Nevertheless, to verify that the assignment was as good as random, we used balance tests and placebo regressions as randomisation tests.Specifically, we did a randomisation test to establish that we could obtain unbiased estimates of the effect of patient gender on quality of care outcomes by using placebo regressions based on standardised patient gender for each case.The rationale for this randomisation test is as follows: if standardised patient gender was correlated with provider quality within each case, then the providers who saw women present Case 1 should differ from those who saw men present Case 1 only within Case 1.By contrast, all other comparisons between those two groups of providers should be uncorrelated with that assignment.Therefore, for each case, we split the sample into the group of providers who saw any man present the case, and the providers who saw any woman present that case.Using these two groups, we ran six placebo regressions comparing how those two groups treated all other potential standardised patient cases.We report additional information regarding power calculations for such an audit study using standardised patients in the appendix.We used ordinary least squares regression and logistic regression to assess differences in clinical care processes and case 
management across standardised patients by gender.In these specifications, we controlled for differences that arose from the study design.These included the location, the case scenario, and whether the provider had an MBBS qualification.We have shown previously that these differences affect the care that is provided and were components of our study design.23,We complemented ordinary least squares regressions with logistic regressions where appropriate, reporting odds ratios by standardised patient gender for dichotomous outcomes and illustrating appropriate CIs for these estimates.We clustered standard errors at the individual standardised patient level when calculating gender differences, and we used inverse-probability-weighted estimates based on our sampling strategy to arrive at city-representative interpretations of our outcome measures.Therefore, reported estimates correspond to the expected average quality of care outcomes and gender differences if providers were chosen at random by a patient from each city with each city contributing equal weight.23,31,All data analyses were performed with Stata 15.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.2602 interactions were done by 24 unannounced standardised patients at 1203 different health-care facilities across two cities.1900 interactions were done by men, who made up 16 of our 24 individual standardised patients.We describe the findings in three parts: whether assignment of standardised patients in the field produced as good as random allocations of women and men for valid inference; how the objective and subjective patient experience varied between women and men; and how provider case management decisions and quality of care varied between women and men.We found that increased clinical scrutiny was associated with higher propensity to treat the patient as though they had tuberculosis, which suggests that providers in general were convinced by the presentation of our standardised patients.A 100% completion rate for the essential history question checklist for each case was associated with a 4% change in the likelihood of correct treatment compared with no questions asked, a 14% increase in the likelihood of giving any medication, an 18% increase in the likelihood of any verbal diagnosis, and a 16% increase in the likelihood of a verbal tuberculosis diagnosis.Although we cannot totally reject provider response to individually varying standardised patient characteristics on all outcomes, the measured height, weight, and age of the standardised patients jointly had no effect on correct management decisions and their inclusion as controls does not systematically affect our main results, ruling out confounding due to gender-correlated physical attributes that might prompt clinical conclusions."We found no differences in the provider's qualification, age, gender, caseload, or the presence of a clinic assistant.Across the 19 comparisons for which we had sufficient statistical power for logistic regression, one was significant at p<0·1, one at p<0·05, and one at p<0·01.The two at p<0·1 and p<0·05 are expected by chance with 19 simultaneous comparisons and are, therefore, rejected as statistically insignificant by Bonferroni multiple-hypothesis, critical value adjustments.The one at p<0·01 occurred in a sample of 35 
These balance results are consistent with the assessment that women and men were as good as randomly assigned to providers during this study and reinforce our conclusion that women and men visited equivalent providers during the fieldwork. Consequently, observed differences in clinical interactions between women and men can be attributed to the gender of the standardised patient rather than to undetected differences between providers. We compared process indicators by standardised patient gender as well as the standardised patients' own subjective experiences of the interactions. We observed one significant difference between the reported experiences of women and men: providers spent significantly less time with men. On all other observed dimensions we found no differences in interactions by standardised patient gender. Comparisons of the standardised patients' subjective assessments of quality, however, showed that men were less likely to agree that the provider seemed knowledgeable about the illness or that the provider addressed their worries seriously. Men also rated the providers lower than did women on a subjective scale. Under the assumption that the men did not have different initial perceptions of or attitudes towards the providers, these subjective assessments mirror the shorter interactions with men. Additionally, differences occurred in the types of history questions asked of women and men. Men were more likely to be asked about smoking and drinking habits, whereas women were more likely to be asked about children. Questions related to the tuberculosis diagnosis, however, did not vary systematically across the cases. Only small differences occurred in the frequencies of essential questions such as the duration of cough or whether the cough produced sputum. For physical examinations, there were small differences, with women more likely to have had their blood pressure taken. Overall proportions of correct management were 40% for women and 36% for men. We detected no differences in this measure of STCI-compliant management, or in any other key quality dimensions of care such as medication use and laboratory testing, between women and men. All estimated differences in correct case management were statistically insignificant, of small absolute magnitude, and in varying directions by case. Estimates ranged from 3% less correct case management for men relative to women in Case 4 to 5% more in Case 1, highlighting the absence of any systematic difference in correct case management by gender. These differences remained insignificant in all subsamples: MBBS providers, non-MBBS providers, providers who were men or women, and either study city. We also found no differences in any of the individual treatment behaviours composing the correct management index, including the decision to refer the case, whether the provider mentioned a suspicion of tuberculosis to the patient, or the choice among various types of tuberculosis testing. Similarly, we found no qualitatively large or statistically significant differences in the use of unnecessary medications. The overall absence of any large difference in case management outcomes is not an artifact of sample size or estimation methods: the null effects are precisely estimated with narrow confidence intervals and are robust to hypothesis testing using logistic regression. Our study assessed differences by gender in Indian health providers' management of tuberculosis patients with identical clinical profiles,
using the gold-standard standardised patient method.We sought to understand whether gender-related differences in quality of care could be contributing to the relative under-diagnosis of men with active tuberculosis in the general population, as has been found in previously published literature.1,9,We demonstrate a general absence of differences in provider behaviour between men and women presenting symptoms of tuberculosis in a high-burden setting with high levels of background gender inequality and large gender differences in both tuberculosis prevalence and notification rates.Three characteristics of the study provide a unique opportunity to assess gender differences in tuberculosis diagnosis and treatment among Indian health-care providers.First, the study is at-scale across two cities with 2602 patient-provider interactions across 1203 health care providers.Second, the study is representative: in each city, we used a comprehensive list of all private health-care providers to randomly sample providers stratified by qualification and reweighted outcomes for representative estimates.Third, the assignment of standardised patient gender among providers was as good as random, although it was not explicitly randomised.Women and men were hired as standardised patients in the study, and the gender of the standardised patient who presented each case to each provider was determined by the field team supervisors.These supervisors were blinded to the gender analysis in the study described here and to provider characteristics other than their name.Although the randomisation was not done explicitly by the researchers, we tested that the assignment of standardised patients to providers in the field resulted in an allocation that was equivalent to explicit randomised assignment and statistically uncorrelated with provider characteristics.These characteristics allowed us to estimate unbiased differences between provider treatment of women and men presenting identical tuberculosis case scenarios across a wide range of patient experience and provider treatment outcomes and to attribute any differences to the gender of the standardised patient.The standardised patient design is not confounded by gendered variation in case presentation across real patients, controls for selective choice of providers by patient gender, eliminates the social desirability response biases inherent in studies based on interviews with patients or providers, and is not susceptible to Hawthorne effects on provider behaviour.9,32,33,Variations by gender in history taking and consultation process seem to be primarily social and quantitatively small.We did not observe systematic differences in provider practice on any quality measures, for any case presentation, or for any level of provider qualification.Additionally, we find no evidence that providers of either gender behave differently when matched with patients of the same gender as themselves.Poor quality of care during the initial clinical evaluation for individuals with suspected tuberculosis, which we have discussed previously, affects women and men equally.20,21,23,34,35, "However, men appear to have received care that involves less provider time and less detailed explanation, and reported significantly worse perceptions of the providers' knowledge, seriousness, and overall satisfaction.We cannot measure in our study whether any of these differences would lead men or women with tuberculosis symptoms to have different experiences of patient satisfaction or stigma on average, 
potentially shaping future care-seeking behaviour and engagement in tuberculosis care, although this is a distinct possibility. Systematic reviews and multisite studies show that in many settings, including India, men are less likely to complete tuberculosis therapy, more likely to die during treatment, and more likely to experience disease recurrence after completing treatment.3,36,37 As such, understanding whether gender-related differences in the time spent by providers, in the explanations given to patients, and in patient satisfaction remain consistent during subsequent stages of care, and whether these differences contribute to men's poorer tuberculosis outcomes, is an important area for future research. The strength of the study design and the scale of implementation in two large cities located in different regions of India lead us to believe that the lack of systematic gender differences in the management of tuberculosis patients in urban India, including appropriate management, referral rates, choice of diagnostic tests, and use of unnecessary medication, is a key and robust finding. Our null findings covered a broad range of process indicators and quality outcomes and have narrow confidence intervals around zero. Our strong balance and randomisation tests suggest that these estimated null results are unlikely to be confounded by observed or unobserved factors. Although we observed gender differences in history taking and examination, as well as differences in time spent with the patient and in subjectively reported satisfaction, these differences appeared to have little consequence for case management. Nevertheless, our study had several limitations. First, although the study was a population-weighted assessment of average behaviours for these provider types and cities, it was not necessarily representative of the provider mix that patients actually face if women and men on average choose to visit different types of providers, and it might not replicate in other settings. Second, observed practice in this study only reflected what health-care providers did when a completely unknown or new patient sought medical care in a first visit to that provider. Third, this study only covered private practitioners in two urban areas in India. Additionally, inherent limitations exist in our approach of using standardised patients to assess gender differences. Although each standardised patient visited over 100 facilities on average, the decision to hire 24 individual standardised patients was based on fieldwork logistics and supervision constraints, which has implications for the power and bias of this study. Specifically, if gender and other disadvantages intersect, our study is externally valid only to the extent that these other disadvantages were also represented in our standardised patient selection. We have compared the characteristics of our standardised patients with profiles of tuberculosis patients who visit private clinics in urban India using NFHS-4, which is a nationally representative sample of 601 509 households.38 Our standardised patients had similar age and education profiles, but none of them came from the lowest wealth quintiles, had less than primary education, or were children or elderly patients. Therefore, our study only relates to the experience of the 50–60% of patients who are in the middle or higher wealth quintiles, have secondary or higher education, and are between 18 and 59 years of age. Standardised patient assignment was not randomised, so the
credibility of the differences being due to gender is based on our evidence that the standardised patients were assigned as good as randomly across providers.Additionally, the standardised patient method is designed to provide objective estimates of actual provider behaviour; however, it provides little insight into why we observe gender-related differences in time spent with men.Detailed qualitative and ethnographic studies involving interviews with, and observation of, patients and providers might provide further insights into the social context that shapes these gender disparities in tuberculosis care.39,40,Despite these limitations, the major findings from this study have implications for public health.Our main results suggest that concerns about health-care providers being responsible for gender differences in diagnostic delays are unlikely to be well-founded, though less time and explanation given by providers to men on average could adversely affect outcomes for men in subsequent stages of the care cascade.Our findings should not be taken to imply that neither men nor women experience disease-related stigma or unique challenges in seeking or accessing tuberculosis care, but they do show that they do not face a systematic gender-related difference in care quality from health providers during the initial diagnostic evaluation for tuberculosis.9,41,42,As we discussed previously in multiple contexts regarding the supply-side of health care, the main cause for concern is the low overall level of correct management for all patients, and its lack of strong correlation with provider characteristics like qualifications.21–23,For women and men, the average health-care provider did not ask the essential history questions that would lead to a tuberculosis diagnosis, did not mention tuberculosis suspicion to the patient, and did not order appropriate microbiological tests to diagnose tuberculosis as per the STCI and international recommendations.Instead, more than 80% of interactions resulted in medicine prescriptions, half of which contained unnecessary antibiotics that do not have a role in tuberculosis care and have negative public health consequences.Prescriptions of fluoroquinolone antibiotics and steroids are particularly worrisome, as they can mask tuberculosis symptoms, leading to delays in diagnosis.30,Given the lack of gender-related differences in quality of care delivered by providers, how can the relative under-notification of men be explained?,As described previously, an individual with tuberculosis must traverse three stages to start treatment: care-seeking, diagnostic evaluation, and linkage to treatment.The absence of gender differences in diagnostic evaluation suggests potential barriers for men at the other two stages.Men might be less likely than women to seek care in the first place for their tuberculosis symptoms or to link to tuberculosis treatment after diagnosis in the Indian context.Although systematic reviews have summarised the literature in India on care-seeking, delays in reaching care, and linkage to treatment, they and many of the included studies did not specifically evaluate gender differences for each.11,12,Additionally, few of the studies included in those reviews evaluated patients seeking tuberculosis care in the private sector.As such, research is needed to better understand gender differences across these other stages of the tuberculosis care cascade and to inform policy and programmes."The standardised patient method might be extended to assess gender 
differences in quality of tuberculosis care in other countries and in India's public sector.In addition to evaluating the diagnostic workup for tuberculosis, the standardised patient method might also have utility for understanding gender differences in quality of care during tuberculosis treatment initiation.The standardised patient method is less useful for understanding gender differences during later stages of the care cascade that involve longitudinal follow-up, given the risk that standardised patients might be detected by providers.For these later stages, quality of care can potentially be measured using cohort studies to understand gender differences in patient outcomes and provider behaviour patterns, especially with regard to linkage to treatment, treatment completion, and recurrence-free survival.Additionally, future research to understand the issue of gender differentials in tuberculosis case notifications might focus on infection risks, the availability and accessibility of high-quality providers, and the decision making of symptomatic individuals.
Background: In India, men are more likely than women to have active tuberculosis but are less likely to be diagnosed and notified to national tuberculosis programmes. We used data from standardised patient visits to assess whether these gender differences occur because of provider practice. Methods: We sent standardised patients (people recruited from local populations and trained to portray a scripted medical condition to health-care providers) to present four tuberculosis case scenarios to private health-care providers in the cities of Mumbai and Patna. Sampling and weighting allowed for city representative interpretation. Because standardised patients were assigned to providers by a field team blinded to this study, we did balance and placebo regression tests to confirm standardised patients were assigned by gender as good as randomly. Then, by use of linear and logistic regression, we assessed correct case management, our primary outcome, and other dimensions of care by standardised patient gender. Findings: Between Nov 21, 2014, and Aug 21, 2015, 2602 clinical interactions at 1203 private facilities were completed by 24 standardised patients (16 men, eight women). We found standardised patients were assigned to providers as good as randomly. We found no differences in correct management by patient gender (odds ratio 1.05; 95% CI 0.76–1.45; p=0.77) and no differences across gender within any case scenario, setting, provider gender, or provider qualification. Interpretation: Systematic differences in quality of care are unlikely to be a cause of the observed under-representation of men in tuberculosis notifications in the private sector in urban India. Funding: Grand Challenges Canada, Bill & Melinda Gates Foundation, World Bank Knowledge for Change Program.
406
Human melioidosis reported by ProMED
Melioidosis is a potentially fatal tropical disease caused by a Gram-negative bacillus called Burkholderia pseudomallei.1–3 This disease is acquired by inhalation, inoculation, or ingestion of microorganisms normally present in water and moist soil,4 and more rarely from other infected individuals.1 Melioidosis is known to be endemic in Southeast Asia and Northern Australia, particularly between latitudes 20° N and 20° S; however, sporadic cases have been reported in the Caribbean, Central America, and South America.5,6 The incidence increases at times of heavy rains, and the population groups at greatest risk are farmers and indigenous inhabitants of rural tropical areas.4,7 The disease usually has an incubation period of 1 to 21 days, although long periods of latency may occur, and has a mortality rate approaching 40% in developing countries.8,9 It causes a wide spectrum of clinical manifestations10 that includes asymptomatic seroconversion, pneumonia, atypical forms with neurological involvement, and severe forms causing septicaemia and death.8,11 Alcohol consumption, diabetes mellitus, and chronic kidney disease are risk factors for more severe disease.1,11 B. pseudomallei infection requires prolonged antibiotic therapy to achieve eradication of the organism and prevent relapses.12 The global burden of melioidosis has yet to be determined, but there are limited sources from which to retrieve data regarding the disease, which is not statutorily notifiable in most countries. ProMED is an internet-based reporting system for emerging infectious or toxin-mediated diseases.13–15 It was founded in 1994 and is currently a program of the International Society for Infectious Diseases.13–16 Its mission is to serve global health by the immediate dissemination of information that may lead to early preventive measures, prevent the spread of outbreaks, and support more accurate disease control. Its sources of information include media reports, official reports, online summaries, reports from local observers, and others.16 The reports are reviewed and edited by expert moderators in the field of infectious diseases and are then posted to the mail server, published on the website, and disseminated to subscribers by e-mail. Being a non-profit, non-governmental system, it is free from governmental constraints.13–16 It currently has over 70 000 subscribers in over 185 countries.16 In this study, the human melioidosis incidents reported by ProMED were reviewed and the reliability of the data retrieved assessed in comparison with published reports. The effectiveness of ProMED as an epidemiological data source was thus evaluated by focusing on melioidosis. The keyword 'melioidosis' was used in the ProMED search engine, looking at ProMED-mail, ProMED-ESP, ProMED-FRA, and ProMED-PORT. All the information in the individual reports was reviewed and the data collected using a structured form, including year, country, gender, occupation, the number of infected individuals, and the number of fatal cases. One hundred and twenty-four entries reported between January 1995 and October 2014 were identified. A total of 4630 cases were reported, with 505 cases recorded as being fatal, giving a reported case fatality rate of 11%. Gender was only recorded in 20 cases, with 12 of these being male. Of the 17 cases for which the age was reported, the median age was 45 years. The occupational status was only reported for six, of whom three were farmers, one was a gardener, one was a housewife, and one was a student.
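As an illustrative footnote to the methods just described, the structured extraction can be tabulated into the overall and country-level case fatality rates summarised in Table 1. The sketch below assumes a hypothetical extraction file with invented column names (country, cases, deaths); the published analysis was a simple manual tabulation, so this is only an illustration of the arithmetic.
```python
import pandas as pd

# Hypothetical extraction file: one row per ProMED entry, with the fields captured on
# the structured form (year, country, number of infected individuals, number of deaths).
entries = pd.read_csv("promed_melioidosis_entries.csv")

# Overall totals and case fatality rate (CFR), e.g. 505/4630, which is approximately 11%.
total_cases = entries["cases"].sum()
total_deaths = entries["deaths"].sum()
print(f"CFR: {100 * total_deaths / total_cases:.1f}% ({total_deaths}/{total_cases})")

# Country-level tabulation comparable to Table 1, sorted by the number of reported cases.
by_country = (entries.groupby("country")[["cases", "deaths"]].sum()
              .assign(cfr_pct=lambda t: 100 * t["deaths"] / t["cases"])
              .sort_values("cases", ascending=False))
print(by_country.round(1))
```
As discussed below, CFRs computed in this way reflect only the outcomes reported to ProMED at the time of posting, not the true mortality of the disease.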
Twenty-five cases were reported as imported. Countries of origin were India, Thailand, Bangladesh, Honduras, Gambia, and Madagascar; two cases had travelled to Finland and Spain from unspecified countries in Asia and Africa, respectively. The distribution of the infected and fatal cases by country is presented in Table 1. Most of the cases were reported from Australia, followed by Thailand, Singapore, Vietnam, and Malaysia. The lowest fatality rates were reported from Australia and Malaysia. Amongst cases from Australia, 1381 were from the Northern Territory. It is well known that B. pseudomallei infection is endemic in Southeast Asia and Northern Australia, but as sporadic cases have been reported with a wide geographical distribution, systems such as ProMED provide an opportunity to alert the international community to the occurrence of cases around the world, including the identification of new foci of endemicity. In addition, the occurrence of clusters of cases in an atypical location might alert public health authorities to the possibility of the deliberate release of B. pseudomallei, a tier 1 biothreat agent.17 When compared with the information provided by reports in the published literature, the most notable finding was that the CFR identifiable from ProMED was low in comparison with that reported from the developing world;6,8,18,19 however, under-reporting of fatal cases by ProMED is not surprising, as its purpose is to act as an early alert system, and outcomes may not be known at the time of posting. In addition, the data were very incomplete with regard to characteristics such as gender, age, and occupation. In the present study, the highest numbers of reported cases of melioidosis on ProMED were from Australia and Thailand, which is consistent with the known epidemiology of the disease, although the total number of cases diagnosed each year in Thailand is higher than that in Northern Australia owing to the greater numbers of exposed individuals.5,6,18–21 Countries such as Singapore and Malaysia are also known to be endemic for melioidosis,5 whilst Taiwan, Brazil, and Papua New Guinea have also reported sporadic cases, sometimes related to specific climate events,7,22–24 which is also consistent with the results found in ProMED. Within Australia, melioidosis is most frequently reported from north Queensland and the 'Top End' of the Northern Territory. One hundred and seventy-six cases confirmed by culture were reported from northern Queensland during the period from 2000 to 2009, with an overall CFR of 21%, falling to 14% in the years between 2005 and 2009.25,26 A similar CFR was reported from the Northern Territory during the years 1990–2000,26,27 comparable to the 13% mortality amongst cases reported on ProMED. The greater representation of reports from the Northern Territory of Australia on ProMED is likely to reflect the fact that the public health authorities there run a proactive annual publicity campaign to alert people to the risk of melioidosis at the start of the rainy season, which is then picked up by ProMED. In Singapore, the mortality rate decreased from a CFR of 60% in 1989 to 27% in 1996,28 with 693 cases of melioidosis and 112 deaths reported between 1998 and 2007, equivalent to a CFR of 16.2%,29 which is also close to that found from ProMED. Between 2003 and 2012, 550 cases were reported in Singapore, with rainfall and humidity levels being found to be associated with disease incidence during that period.20 The relatively low CFR in
developed countries such as Singapore and Australia is likely to be related to increased awareness amongst medical staff, allowing earlier diagnosis and treatment of early-stage disease,28 optimal antibiotic therapy, and improved supportive management.On the other hand, a prospective cohort study in the northeast of Thailand identified 2243 patients between 1997 and 2006, with a CFR of 42.6%,20 a rate that is much higher than the data from ProMED would suggest.By contrast, although ProMED captured information about some countries in which melioidosis is rarely reported, such as Madagascar, Mauritius, and Brazil, the high CFRs amongst cases from these countries reported on ProMED are likely to be due to the low denominators, with reporting skewed to the recognition of more severe cases, compounded by a lack of awareness amongst medical staff leading to sub-optimal case management.Clearly ProMED cannot always be regarded as an accurate source of information about patient outcome, which is not surprising as that is not its purpose.For Vietnam, the 343 cases included in Table 1 corresponded to historic information reported to ProMED-mail from GIDEON, a system that captures and collates information about infectious diseases from a wide range of sources including the published literature, and related to cases that were reported among American Troops during the Vietnam War.The disease is probably still prevalent in Vietnam, although it is relatively rarely diagnosed among the indigenous population.While this search on ProMED identified no cases reported from Colombia, indigenous melioidosis has recently been recognized there, with 10 cases originating from different parts of the country recently reported in the Spanish literature.30,31,Indeed, Brazil is the only country in the Americas amongst the ProMED reports, despite the fact that sporadic cases have also been reported from other countries in the region, such as Costa Rica and Puerto Rico.9,32,33,It is likely that the low reported incidence of melioidosis in countries of the Americas is associated with a lack of awareness amongst clinical and laboratory staff, leading to a failure to identify the microorganism, thereby underestimating the true incidence of infection.20,29,It is important to ensure that the identification of melioidosis in a new area is widely communicated to healthcare staff in order for appropriate methods to be used to confirm the diagnosis by sampling, laboratory culture, isolation, and identification of B. 
pseudomallei whenever possible.8,Furthermore, it is notable that more than 95% of ProMED reports were in English and less than 5% in other languages, including Spanish, meaning that those who only subscribe in languages other than English would not have received significant information about melioidosis.This has probably been influenced by the fact that there is not yet enough awareness of melioidosis in Francophone and Hispanophone countries in Latin American, the Caribbean, and Africa.Although ProMED proved to be a useful source of information about melioidosis, it was evident that it gave a misleading impression of the distribution and mortality in some areas, as has been reported previously for other infections.13,We think it would be useful to consider the adoption of standardized templates for reporting to ProMED in order to enhance the standardization of data and provide clear, accurate, and reliable information that could be used to conduct epidemiological analyses in order to establish the status of emerging diseases and assist in their recognition and control.In addition, we think it would be useful for reports in one language to be mirrored to ProMED in other languages, especially when it is the language of the country from which the report originated.In addition, ProMED may be useful as a source of information for travel medicine practice.10,Melioidosis is well described amongst both short- and long-term visitors to endemic areas and so needs to be considered in the evaluation and treatment of patients with sepsis or other febrile illnesses returning from endemic countries,34,35 particularly from Southeast Asia and Northern Australia as has been seen in this study.Melioidosis should be considered in anyone with compatible clinical manifestations, even in the absence of apparent risk of exposure to B. pseudomallei.A greater appreciation of this disease among physicians in non-endemic areas should lead to better management of imported cases, and ProMED could also be a source of epidemiological information when giving pre-travel advice and conducting post-travel consultations.In regions, such as South America, where reports of melioidosis have emerged relatively recently, strategies for surveillance and recognition of melioidosis need to be strengthened.Working groups of national infectious diseases societies could help to increase the awareness among general and infectious diseases practitioners, but might also influence public health authorities by suggesting that melioidosis becomes a notifiable disease in each country.In conclusion, information systems such as ProMED provide insight through daily reports of the occurrence of both individual cases and clusters of infectious diseases.This helps the rapid dissemination of knowledge of the current global situation of emerging infectious diseases and their outcomes in terms of morbidity, disability, and death.However, the data on ProMED are often incomplete and it is important to continue improving and supplementing these systems to allow a more precise knowledge of the worldwide epidemiology of emerging diseases such as melioidosis.Funding: Universidad Tecnologica de Pereira, Pereira, Risaralda, Colombia.Conflict of interest: There is no conflict of interest.Contributions: Study design: AJRM; data collection: KMNP, SCC, AJRM; data analysis: KMNP, SCC, AJRM; writing: all authors.All authors read the final version submitted.
There are limited sources describing the global burden of emerging diseases. A review of human melioidosis reported by ProMED was performed and the reliability of the data retrieved assessed in comparison to published reports. The effectiveness of ProMED was evaluated as a source of epidemiological data by focusing on melioidosis. Methods: Using the keyword 'melioidosis' in the ProMED search engine, all of the information from the reports and collected data was reviewed using a structured form, including the year, country, gender, occupation, number of infected individuals, and number of fatal cases. Results: One hundred and twenty-four entries reported between January 1995 and October 2014 were identified. A total of 4630 cases were reported, with death reported in 505 cases, suggesting a misleadingly low overall case fatality rate (CFR) of 11%. Of 20 cases for which the gender was reported, 12 (60%) were male. Most of the cases were reported from Australia, Thailand, Singapore, Vietnam, and Malaysia, with sporadic reports from other countries. Conclusions: Internet-based reporting systems such as ProMED are useful to gather information and synthesize knowledge on emerging infections. Although certain areas need to be improved, ProMED provided good information about melioidosis.
407
Trends in local newspaper reporting of London cyclist fatalities 1992-2012: The role of the media in shaping the systems dynamics of cycling
The health benefits of cycling are well established, with the physical activity benefits substantially outweighing the injury and air pollution risks in populations where a broad range of age groups cycle.Increasing levels of cycling can also confer additional benefits including reducing urban congestion and greenhouse gas emissions.Over the past twenty years, these benefits have prompted countries and cities across the world to develop pro-cycling policies."This includes the recent publication of an ambitious ‘Mayor's Vision for Cycling’ in London, a city that has already seen rises in cycling.Nevertheless, cycling levels in London and other parts of the UK remain lower than those in many European countries.Transport for London estimates that 23% of journeys could realistically be cycled in the capital, ten times higher than at present.Studies examining barriers to cycling have identified multiple factors that may contribute to lack of uptake, but one of the most common reasons people give for not cycling is perceived risk.It is plausible that the media plays an important role in shaping these safety concerns.The effect of media reporting on public opinion and behaviour is widely appreciated including evidence that media can affect road safety behaviours.McCombs’ agenda-setting theory describes the role of the media in establishing which issues are most prominent in the public agenda.In addition, later research also suggests that second-level agenda-setting may be at work, in defining how these issues are conceived.This opinion-forming role may be particularly important with respect to coverage of cycling fatalities and serious injuries, because such incidents occur comparatively rarely and so are not directly experienced by most people on a regular basis."For this reason, it has been argued that people's overall perception of road traffic risks typically draws on media reporting as well as their personal perceptions of risk in their everyday lives.Moreover, if the media provides memorable coverage of these comparatively rare incidents then the public may overestimate the risk of such events, a phenomenon known to psychologists as the ‘availability heuristic’.This phenomenon has been demonstrated most clearly for public fear of crime.Cyclist deaths and serious injuries share aspects of newsworthiness with crime: they are easy to write about with a simple storyline and convenient access to information; have human interest to ordinary people; and may include enthralling details of violence.In addition, in the context of low levels of cycling, the absolute number of cycling deaths and injuries is low enough to permit each incident to be reported individually.This is a feature shared with aeroplane crashes, another type of risk that is overestimated by the public due to preferential media coverage.In this light, it is noteworthy that a recent media analysis in Australia found that the most common type of cycling-related story involved cyclists being injured, while the second most common involved cyclists being killed.Similar findings have been reported in London, with cycling ‘accidents and dangers’ accounting for 27% of all issues mentioned in cycling-related newspaper articles, a much higher percentage than any other category.Recently, the role of the media in shaping attitudes to cycling has attracted the attention of researchers using a ‘systems dynamics’ perspective to model the dynamic influences on cycling for transport in cities.System dynamics modelling can incorporate the complex 
interplay of individual, societal, environmental, and policy factors shaping behaviour and synthesise these into a qualitative causal theory of positive and negative feedback loops. This dynamic causal loop diagram can then be used as the basis for quantitative simulations to inform policy, and such approaches are increasingly applied across a range of disciplines related to safety and behaviour. In two previous pieces of research, qualitative system dynamics models exploring the determinants of trends in urban cycling have been developed through interviews and workshops with a broad range of policy, community and academic stakeholders. During this development process, many of the relationships in these earlier models have been tested through data identification and simulation. These models propose that as cycling becomes more common in a population, there is also likely to be an increase in the absolute number of cycling crashes. If the number of crashes covered by the media increases in tandem, this is likely to worsen public perceptions of cycling safety. This is particularly the case if, as has been argued elsewhere, public perceptions of road traffic risks are more sensitive to absolute numbers of events than to changes in the underlying statistical risks per unit of travel. This could, in turn, introduce a balancing (negative) feedback loop that dampens the total increase in cycling levels. To our knowledge, however, there exists no empirical evidence concerning this particular part of the model, namely the relationship between changes in the prevalence of cycling and changes in media coverage of cycling road traffic crashes. This paper therefore aimed to examine this relationship in London, a city in which cycling levels have almost doubled in the past 20 years (see also Fig.
2). Within the boundary of our model of urban cycling, our aim was to examine whether this change in the prevalence of cycling was associated with changes in the proportion of cyclist fatalities covered by London's largest local newspaper, and in the amount of coverage per fatality. In order to assess whether any observed changes might simply reflect wider trends in media coverage of road traffic crashes, rather than being specifically associated with increased cycling, we used the coverage of motorcyclist fatalities as a control. We used this as our control because motorcycling is another minority transport mode that carries a comparatively high risk of injuries, the prevalence of motorcycling in London remained relatively stable, and pilot work indicated that motorcyclist fatalities resembled cyclist fatalities in being fairly readily identified in newspaper reports using keyword searches. We also sought to use a similar approach to compare London to three other English cities with contrasting recent trajectories in cycling levels. We identified cyclist and motorcyclist fatalities reported to the police, and used this as a denominator for our subsequent examination of coverage rates in the local media. We identified these denominator fatalities using the 'STATS19' dataset, which records details of all road traffic injuries that occur on the public highway and that are reported to the police. Comparisons between STATS19 and hospital admission data indicate that a high proportion of injuries are reported to the police in London, and this figure is likely to be particularly high for fatalities. In previous work we have argued that it is plausible that STATS19 covers around 95% of all cycling fatalities in London. In London, we identified all fatalities across a 20-year period in which the casualty was travelling by bicycle or by motorcycle/moped. Data available from STATS19 on each fatality included: the date of the fatality; the location of the crash; the age and sex of the casualty; and the mode of travel of any other vehicle involved in the fatality. We used this last variable to assign a 'strike mode' to each fatality, defined as the largest other vehicle involved. This could include 'no other vehicle' for cases in which, for example, a motorcyclist lost control of his motorbike. Ethical approval was not required as all data were fully in the public domain. When examining which of these police-reported fatalities were covered in the local media, we focussed on articles in the London Evening Standard. The Evening Standard is the most widely distributed city-specific daily newspaper in London, and its archives were accessed via the electronic journalist database Lexis Library. We searched the Lexis Library using a pre-specified Boolean keyword search for cyclist articles and a separate search for motorcyclist articles. These searches were developed and refined through a piloting phase which compared the returns of different candidate searches. We performed this search of the London Evening Standard archives across the period January 1992–December 2014, which returned 2041 articles from the cyclist search and 1875 from the motorcyclist search. AR and/or AG then read through these to identify articles that referred to a particular cyclist or motorcyclist fatality in STATS19. Articles were linked to an individual case based on time, date, gender, age, strike mode and area. For a STATS19 fatality to count as having been 'covered', the individual did not have to be named but there did have to be enough information to identify the case
specifically.Thus, for example, “a woman cyclist and her daughter were killed today on London Bridge” would count as coverage for the cases in question, but “7 women cyclists were killed last year in London” would not.Multiple articles could be assigned to the same individual if their fatality was covered more than once.On a sample of 70 STATS19 fatalities, inter-rater reliability between AR and AG was 97% for our primary outcome, which was whether a particular fatality received any media coverage within two years of the death."In addition to this pre-specified data extraction, our emerging findings prompted us to conduct a post hoc analysis in which we identified all articles in which the headline described a call or campaign for cycle safety improvements or criticised the safety of existing infrastructure.All headlines were assessed independently by both AG and RA.In the course of searching for the 1218 London fatalities recorded in STATS19, we found newspaper reports of a further 6 fatalities which were not in STATS19, despite appearing eligible for inclusion in that database.We did not include these 6 fatalities in our analyses; sensitivity analyses indicated that this did not materially affect any of our findings.We replicated the approach described in Sections 2.1 and 2.2 across three other English cities, chosen purposively to provide a mixture of contrasting cycling trajectories."The first, Birmingham, is Britain's second largest city and has experienced low levels of cycling throughout the study period.The second, Bristol, is the largest British city outside of London to have seen a substantial increase in cycling over the past 20 years."The third, Cambridge, is Britain's leading cycling city and has seen sustained high levels of cycling.In each of these three cities we again extracted cyclist and motorcyclist fatality data from STATS19, and identified the leading daily newspapers for each city.The Birmingham and Bristol newspapers were covered by the Lexis Library up to December 2014, but coverage only started in the late 1990s.We therefore only searched for fatalities occurring during this same time period.The Cambridge newspaper was not covered in Lexis Library at all, and we therefore instead searched manually in this newspaper using microfilm archives in the British Library.These searches involved scanning the newspaper from cover to cover for all three-month periods subsequent to any fatality after January 1992.All articles identified this way contained the words used in our electronic search terms, i.e. 
would have been returned as 'hits' by an electronic search. The pre-specified primary outcome for our study population of STATS19 fatalities was the proportion of fatalities receiving any media coverage within two years of the fatality. For fatalities receiving any media coverage, we were also interested in the amount of coverage for each fatality. The secondary outcome was therefore the mean number of articles reported within two years per fatality. The upper time limit of two years was chosen to reflect the fact that a longer follow-up period was not available for the most recent fatalities. We had an insufficient number of time points to undertake formal time series analysis modelling how changes in cycling levels affected changes in media coverage over time. We therefore instead investigated how our primary and secondary outcomes varied according to the year in which the fatality took place, examining whether media reporting of cyclist fatalities changed over time in line with the increase in total cycling levels. These analyses were stratified by city and by travel mode, and started with the calculation of raw proportions and means. We then proceeded to fit Poisson regression models with robust standard errors in order to calculate risk ratios. We also used these Poisson models to test for interactions between year of fatality and travel mode, that is, whether trends over time differed between cyclist and motorcyclist fatalities. In the case of our secondary outcome, similar tests for interaction were performed after dichotomising the number of articles reported into 1–2 articles versus ≥3 articles. In London, we additionally used multivariable Poisson regression models to assess whether location, age, sex, and strike mode were predictors of whether a given fatality received any media coverage. STATS19 had 100% complete data on all these characteristics except age, which was 98% complete. These missing data were imputed using multiple imputation by chained equations under an assumption of missing at random. All analyses were conducted using Stata 13.1. Across the study period, the annual number of cyclist fatalities in London remained relatively stable, at around 15 per year. Given that the estimated daily number of cycle trips almost doubled, this implies a reduced injury rate per cyclist over the time period. Although the total number of cyclist fatalities was fairly stable, the proportion covered in the London Evening Standard increased markedly, from 6% in 1992–1994 to 78% in 2010–2012.
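To illustrate the modelling strategy described in the Methods (Poisson regression with robust standard errors to estimate risk ratios for coverage, and tests for interaction between period and travel mode), the following is a minimal sketch. The published analysis was conducted in Stata 13.1; this Python analogue assumes a hypothetical fatality-level file with invented column names (covered, period, mode, sex, age_group, region, strike_mode) and is illustrative only.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical columns: covered (0/1, any article within two years of death),
# period (e.g. '1992-1994' ... '2010-2012'), mode ('cyclist'/'motorcyclist'),
# sex, age_group, region, strike_mode.
fatalities = pd.read_csv("fatalities.csv")

# Poisson regression of the binary coverage outcome with robust (sandwich) standard errors,
# so that exponentiated coefficients are interpretable as risk ratios for coverage.
cyclists = fatalities.query("mode == 'cyclist'")
rr = smf.glm("covered ~ C(period) + sex + C(age_group) + C(region) + C(strike_mode)",
             data=cyclists, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(rr.params.filter(like="period")))               # adjusted risk ratios by period
print(np.exp(rr.conf_int().filter(like="period", axis=0)))   # and their confidence intervals

# Interaction between period and travel mode: does the time trend in coverage differ
# between cyclist and motorcyclist fatalities?
inter = smf.glm("covered ~ C(period) * C(mode)", data=fatalities,
                family=sm.families.Poisson()).fit(cov_type="HC1")
print(inter.wald_test_terms())   # includes a joint test of the period-by-mode interaction terms
```
A Poisson model with robust standard errors is used for the binary outcome, rather than logistic regression, so that the exponentiated coefficients can be read directly as risk ratios.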
This translated into an adjusted risk ratio of around 13: after adjusting for the fatality's gender, age, the region of London and the strike mode, the likelihood of receiving any media coverage was around 13 times higher in 2010–2012 than in 1992–1994. This change was highly significant, and largely occurred after 2003. Thus the increase in the media's propensity to report cycling fatalities coincided with the period in which cycling in London also increased most rapidly. There was likewise strong evidence that the mean number of articles per fatality increased over the study period, even when we only calculated this mean in relation to those fatalities that received at least some coverage. This latter effect was partly driven by eight individuals killed between 2004 and 2011 who were each covered in 10–31 separate articles, with much of this sustained coverage being accounted for by articles calling for increased road safety. Across the years studied, the number of cyclist-fatality headlines that featured calls for cycle safety improvements, or criticism of current infrastructure, rose from 1/14 in 1992–2003 to 30/84 in 2004–2012. The proportion of motorcyclist fatalities covered in the Evening Standard also varied across the study period, being higher in 2004–2009 than in other years. The magnitude of this variation was smaller than that seen for cyclists, however, and the increase in the mid-2000s was not sustained. There was little change across the time period studied in the number of articles reported per motorcyclist fatality. For both of these outcomes, tests for interaction between year and fatality mode provided strong evidence that the pattern seen for cyclists differed from that seen for motorcyclists. This provides some evidence that the observed pattern for cycling cannot be attributed to changes across the study period in the reporting of road traffic crashes more generally. Table 2 shows the association between other individual and crash-related factors and whether a cyclist or motorcyclist fatality received any coverage. There was no evidence that any of these predictors of coverage differed between cyclists and motorcyclists, so we pooled data from cyclists and motorcyclists in the regression analyses in order to increase power. In minimally-adjusted analyses, there was evidence that fatalities among females and among younger individuals were more likely to be covered, as were fatalities occurring in more central parts of London. There was also a trend towards higher coverage of fatalities where the strike mode was a heavy goods vehicle. In mutually-adjusted analyses these effects were attenuated, but marginally significant effects remained for gender, age and area of London. In all three non-London cities, the level of coverage of both cyclist and
motorcyclist fatalities was uniformly high across the whole of the study period, including 100% coverage of cyclist fatalities in Bristol and Cambridge.This uniformly high coverage meant that there was limited scope for any increase in coverage and this, in combination with the smaller numbers of fatalities, meant that there was insufficient power to conduct meaningful comparisons across time.The number of articles reported per fatality covered in the press was likewise high in these other settings relative to London.Overall, therefore, the three smaller UK cities had a pattern of newspaper coverage that was comparable to that seen in the most recent years for cycling in London.In London, the number of cycling trips has doubled in the last 20 years.Over the same period, the number of cyclist fatalities covered in the largest London newspaper has increased ten-fold, even though the total number of cyclist fatalities has remained stable.The timing of this increase in media coverage of cyclist fatalities closely followed the timing of the fastest increase in the prevalence of cycling.The observation that the coverage of motorcyclist fatalities remained low throughout the study period provides some evidence that the changes observed for cycling did not simply reflect wider shifts in how the newspaper covered road traffic crashes.It therefore seems plausible that the change in coverage for cyclist fatalities was instead specifically related to cycling becoming more ‘newsworthy’ as cycling became more common and as promoting cycling became an increasingly prominent transport goal for London policy-makers.Finally, our analyses also suggest that this increased frequency of covering cycling fatalities may have been accompanied by an increase in the number of articles campaigning for improved conditions for cyclists.These findings suggest a need to refine and extend the dynamic causal theory that motivated this research.Our starting point for this research was the balancing loop B1.This loop hypothesised that more cyclists would lead to more cycling fatalities and therefore to more media reports of those fatalities, and that this in turn would reduce public perceptions of safety and so dampen cycling uptake.In London, contrary to the prediction of this feedback loop, we found that coverage of motorcyclist fatalities did not increase despite an increase in the absolute number of deaths.We also found that the coverage of cyclist fatalities did increase in London without any increase in the number of cyclists killed.By contrast, the three smaller cities had almost universal media coverage of cycling fatalities, and so coverage changed in line with the total number of fatalities as proposed by the balancing loop B1.Taken together, we consider that the loop B1 may be relevant in some settings, but our findings provide some evidence that this loop is insufficient to capture fully the complex and context-specific role of the media in shaping cycling trends.At least in London, we therefore propose that two further feedback loops are likely to be active.Firstly, a rapid increase in the uptake of cycling may lead to an increase in media interest even without an accompanying rise in cyclist fatalities.This leads to an increased likelihood of reporting fatalities in the media, again plausibly reducing public perception of cycling safety and acting as a hidden limit to the growth of cycling uptake.On a more positive note, if part of this increased reporting is tied to campaigns to improve cycling safety, then 
the media may in the longer term also play a role in influencing safety investment as part of a self-perpetuating reinforcing loop.Previous qualitative work in London has suggested a shift in recent years towards more media advocacy on behalf of cyclists, and our findings also provide preliminary evidence of an increase in media campaigning for ‘safe cycling’ across the past two decades.Thus while the increased media coverage of cycling fatalities may have discouraged individuals from cycling, it may have influenced policy-maker behaviour in a different direction towards being more supportive of cycling.Further research could usefully investigate the ‘audience effect’ aspects of these loops, i.e. how individuals and policy-makers respectively understand and react to media coverage.If media stories about fatalities are accompanied by repeated calls for greater government investment in cycling facilities, and if building these facilities successfully improves both objective and perceived safety, then this could lead to a more rapid uptake of cycling and further media interest.There would, however, be a delay between media campaigning and investment, hence the indication in Fig. 4 that this feedback loop would operate with a time lag.Moreover, this strategy may be undermined if media safety campaigns involve reigniting stories about fatalities.It could also be undermined if media campaigns focused primarily upon the need for individuals to cycle more cautiously, as opposed to calling for safety improvements to the wider cycling environment.We intend to explore this last point further in future qualitative work, which will provide an in-depth analysis of the content of the media fatality reports, and how this content may have changed over time.In London, our analysis suggests that the balancing loops B2 and B3 are probably at present the strongest of these feedbacks.The results from other smaller cities, however, hint this may not be the case in all settings.In the data we examined, these cities appear already to be at or near ‘saturation point’ such that every cycling fatality is reported.This may well reflect the fact that absolute numbers of fatalities are smaller in these cities, and therefore any given fatality more newsworthy.It is interesting to note that the saturation apparent in our data applies across all three cities, even though their low numbers of fatalities arise from somewhat different processes.In the context of such saturation, the balancing loop B1 is likely to be most active, in that any increase or decrease in the absolute numbers of fatalities would be expected to translate into increases or decreases in levels of media coverage.There would also be the potential for reinforcing loop R1 to operate in these cities, depending on the predominant media discourse.The need for policies that proactively improve cycling safety is reinforced by the potential for media coverage of cycling deaths to undermine policy objectives to increase the number of people cycling.In large cities, where increased cycling levels may lead to dramatic increases in coverage per fatality, there is a particularly urgent need to reduce cycling risks to the much lower levels seen in higher-cycling contexts.Moreover, the changes in media reporting practices observed in our data also highlight that objective cycling risk is not the only influence on subjective risk perceptions.Such influences on risk perceptions are therefore also worthy of independent attention.By offering a more nuanced understanding of 
the complex, dynamic relationships between cycling numbers, fatalities and responses by the media, we hope to assist in creating more effective policies to meet the objectives of increasing both cycling levels and cycling safety.We also believe that similar approaches could usefully be applied to risk perceptions stemming from personal experience rather than from the media.For example, previous research has indicated the importance of experienced near-miss incidents to perceived safety, so action to improve driver behaviour might be another means of mitigating the loops identified in Fig. 4.Finally, although our primary focus was on changes in the reporting of fatalities over time, we also found some evidence that the London newspaper we examined was more likely to cover cyclist or motorcyclist fatalities in more central parts of London."This might be because the newspaper's readers are more likely to live in, work in or visit these central areas, thus illustrating the frequent role that ‘proximity’ plays when determining what is newsworthy. "Fatalities among women were also more likely to be covered, which may be related to women being perceived as more ‘vulnerable’ as road users than men, despite men's generally higher injury risks.It is interesting to speculate how this might differentially impact on take-up."Cycling in London continues to be male-dominated and it is possible that women may be disproportionately influenced by hearing about other women's deaths while cycling.Thus the balancing loop identified here could perhaps also operate to reinforce the existing inequalities in take-up.Strengths of our study include our innovative attempt to link individual police records of road traffic crashes to media coverage of those fatalities; our examination of a time period spanning two decades; and our use of motorcyclist fatalities as a control group for cyclists.One limitation is that we only examined one newspaper in each setting.Although the London Evening Standard is the oldest and largest daily newspaper in the London region, our findings may not generalise to its competitors or to alternative sources.In addition, although the format of the Evening Standard has remained fairly stable over time, news-reading habits have changed considerably in the past 20 years.These changes in news-reading habits may mean that the total exposure of the public to coverage of cycling fatalities has increased to a greater or lesser degree than the increase documented in the Evening Standard.In addition, our study focussed only on the UK, and it is unclear how far the findings generalise to other parts of the world.It would therefore be a useful extension in future research to make comparisons of different media sources, as well as between international cities that have experienced substantial increases in cycling such as New York.A further limitation of our study was its exclusive focus on newspaper reports of fatalities.One useful line of future research would be to assess how far our findings generalise to coverage of serious injuries among cyclists.This would be a particularly interesting extension in the smaller English cities examined, in order to investigate whether coverage of serious injuries is below ‘saturation point’ and may therefore have had scope to change over time.Another useful extension would be to set newspaper reports of fatalities in the context of all newspaper reporting on cycling over the time period.One previous study which did this using Melbourne and Sydney newspapers found 
that a high number of cyclist fatality and injury stories were observed in both 1998–1999 and 2008–2009, but that overall the ratio of ‘negative’ to ‘positive’ cycling stories decreased over the decade examined. If such effects have also been operating in London, it is possible that these counter-balance to some extent the balancing feedback loops shown in Fig. 4. Another limitation of our study is that it only provides empirical evidence regarding the first part of the systems model shown in Fig. 4, i.e. the link between cycling levels and media reporting. In other areas, there appears to be well-developed evidence that the media has complex agenda-setting and framing effects on public opinion, as well as contributing in this way to setting a wider civic policy agenda. This wider role was echoed during our previous qualitative research, during which a wide range of stakeholders hypothesised a number of relationships between media coverage and public perceptions of safety. It would be highly valuable in future research to complement the evidence presented in this paper with further empirical evidence regarding these other hypothesised relationships, and thereby to extend further our understanding of the system dynamics of urban cycling. It would likewise be valuable to examine more closely the direct and indirect impacts of media coverage of cycling deaths on policy decisions and investment in cycling safety infrastructure and campaigns. Finally, the present study was limited in largely focussing on ‘whether’ a fatality was covered in local newspapers, with less attention given to the content of that coverage. In other words, we largely focussed on the first role of the media discussed by McCombs’ agenda-setting theory of the media rather than the second role. We intend to address this second role in a future qualitative paper which will examine how the Evening Standard covers cyclist fatalities; whether this has changed over time; and how this compares to coverage of motorcyclist fatalities or coverage in the three other English cities. This further qualitative research will also examine more fully the extent to which media coverage of cyclist fatalities may be instrumental to wider media campaigns to improve cycling infrastructure or cycling policy. Bringing the quantitative and qualitative research together, we hope to provide further insights into how local media may facilitate or hinder efforts to promote cycling and to improve cycling safety, including further refining our causal theory.
Background Successfully increasing cycling across a broad range of the population would confer important health benefits, but many potential cyclists are deterred by fears about traffic danger. Media coverage of road traffic crashes may reinforce this perception. As part of a wider effort to model the system dynamics of urban cycling, in this paper we examined how media coverage of cyclist fatalities in London changed across a period when the prevalence of cycling doubled. We compared this with changes in the coverage of motorcyclist fatalities as a control group. Methods Police records of traffic crashes (STATS19) were used to identify all cyclist and motorcyclist fatalities in London between 1992 and 2012. We searched electronic archives of London's largest local newspaper to identify relevant articles (January 1992-April 2014), and sought to identify which police-reported fatalities received any media coverage. We repeated this in three smaller English cities. Results Across the period when cycling trips doubled in London, the proportion of fatalities covered in the local media increased from 6% in 1992-1994 to 75% in 2010-2012. By contrast, the coverage of motorcyclist fatalities remained low (4% in 1992-1994 versus 5% in 2010-2012; p = 0.007 for interaction between mode and time period). Comparisons with other English cities suggested that the changes observed in London might not occur in smaller cities with lower absolute numbers of crashes, as in these settings fatalities are almost always covered regardless of mode share (79-100% coverage for both cyclist and motorcyclist fatalities). Conclusion In large cities, an increase in the popularity (and therefore 'newsworthiness') of cycling may increase the propensity of the media to cover cyclist fatalities. This has the potential to give the public the impression that cycling has become more dangerous, and thereby initiate a negative feedback loop that dampens down further increases in cycling. Understanding these complex roles of the media in shaping cycling trends may help identify effective policy levers to achieve sustained growth in cycling.
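One plausible way to formalise the mode-by-period comparison reported above (coverage rising from 6% to 75% for cyclist fatalities while remaining at 4–5% for motorcyclist fatalities) is a logistic regression of coverage on mode, time period and their interaction. The sketch below is illustrative only: it is not the authors' analysis code, and the fatality counts are placeholders chosen merely to mimic the reported proportions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical fatality-level data mimicking the reported coverage proportions:
# each row is one police-recorded fatality, flagged 1 if it received newspaper coverage.
def make_rows(mode, period, n_covered, n_not_covered):
    return pd.DataFrame({
        "mode": mode,
        "period": period,
        "covered": [1] * n_covered + [0] * n_not_covered,
    })

df = pd.concat([
    make_rows("cyclist",      "1992-1994", 3, 47),   # ~6% covered (placeholder counts)
    make_rows("cyclist",      "2010-2012", 30, 10),  # ~75% covered
    make_rows("motorcyclist", "1992-1994", 4, 96),   # ~4% covered
    make_rows("motorcyclist", "2010-2012", 5, 95),   # ~5% covered
], ignore_index=True)

# Logistic regression with a mode x period interaction: a significant interaction
# term indicates that the change in coverage over time differs between modes.
model = smf.logit("covered ~ C(mode) * C(period)", data=df).fit()
print(model.summary())
```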
408
A social cost benefit analysis of grid-scale electrical energy storage projects: A case study
Electrical energy storage can support the transition toward a low-carbon economy by helping to integrate higher levels of variable renewable resources, by allowing for a more resilient, reliable, and flexible electricity grid and promoting greater production of energy where it is consumed, among other benefits . In addition to decarbonisation, EES promotes lower generation costs by increasing the utilisation of installed resources and encouraging greater penetration rates of lower cost, carbon-free resources . EES plays an important role supporting distributed generation and distribution planning processes for future power systems. Different jurisdictions are evaluating the value of EES for planning purposes related to the next generation of electric distribution utilities . The global electrical energy storage market is expanding rapidly, with over 50 GW of utility-connected and distributed energy storage systems expected by 2026. In the United States alone, deployment is expected to be over 35 GW by 2025 . This upward trend is mainly explained by favourable policy environments and the declining cost of EES, especially batteries . Market structures that support its deployment are also observed . The declining costs of EES combined with cost optimisation models show an increase in the number of applications and use-cases of storage technologies . There are different types of EES technologies with specific technical characteristics that make them more or less suitable for a range of different EES applications . Depending on the market, EES technologies and their applications can be subject to different regulatory contexts and policies . Even though there are a large number of EES technologies, not all of them are at the same level of development. This reflects the different size of capital and/or operational costs among them. In fact, many of them are still in a research or development stage. While pumped hydro storage is among the most mature and cheapest storage technologies for short-term and long-term storage , battery storage is the one with the most commercial interest and growth potential . EES can be used for multiple applications and can therefore generate different revenue streams whose value depends on the type of technology and the place where the EES facility is located: at generation sites, on the transmission or distribution grid, or behind the end consumer’s meter . Different studies have evaluated the costs and benefits of EES; however, few of them take into account the multi-product nature of EES, in keeping with its diverse revenue streams, or the uncertainty component. Idlbi et al.
estimate the net benefits of battery storage systems (BSS) in the provision of reactive power versus other options such as conventional reinforcement. They suggest that BSS for voltage compliance is more economically viable than the grid reinforcement option but less viable than power curtailment. However, this viability can increase if we take into account the multi-product nature of BSS, which is not limited to reactive power support only, and the fact that battery costs show a downward trend, which is making EES more competitive. Gunter and Marinopoulus estimate and evaluate the contribution of grid-connected EES to frequency regulation and peak limiting in the eastern United States and California. Results from their cost benefit analysis and sensitivity analysis suggest that EES deployment is economically viable even with market structures less beneficial than the current ones. However, the large profitability of EES in California may be explained by the subsidies applied to the development of EES. Shcherbakova et al. evaluate the economics of two different battery energy storage technologies for energy arbitrage in the South Korean electricity market. They find that neither of these storage technologies is economically viable under current market conditions. They also recognise that the inclusion of other potential financial benefits in ancillary services and other applications may reverse this result. Wade et al. evaluate the benefits of battery storage connected to the distribution network. The benefits of the storage system are evaluated based on the response of multiple events requiring voltage control and power flow management. The authors find that the introduction of EES embedded in the distribution network has a positive impact on the tasks associated with these two variables. Other studies concentrate on the analysis of the costs and benefits of EES and renewable energy integration using specific optimisation models. Sardi et al. evaluate the costs and benefits of connecting community energy storage in the distribution system with solar PV generation. A comprehensive set of EES benefits and some specific costs were identified. The authors suggest that the proposed strategy helps to find the optimal location of the EES that maximises the total net present value. Han et al. propose an optimisation model for integrating grid-connected micro-grids with solar PV and EES. A cost benefit analysis is used in order to establish a generation planning model of a micro-grid that maximises the net profits. Among the studies most closely related to this study are Perez et al.
, Newbery and SNS .These studies are also focused on the evaluation of net benefits of a particular case study.However, our paper is the one that includes the most comprehensive list of EES benefits and costs.This paper in comparison with others, incorporates risks and uncertainty of net benefits, costs and battery lifespan.In addition, rather than modelling EES from a business case perspective or in a future-state of the power system dominated by renewables and distributed generation, this study uniquely evaluates a specific energy storage project from society’s perspective in order to cost-effectively guide investment in EES projects and discuss policy implications and electricity market reforms for achieving a low carbon network.Accurately valuing EES projects helps inform system operators, distribution network operators, generators, suppliers, regulators, and policy-makers to make decisions to efficiently allocate resources to modernize the electricity grid.This paper seeks to examine the empirical trials from the Smarter Network Storage project through the lens of a social cost benefit analysis to evaluate publicly sanctioned investments in grid-scale EES in Great Britain.The social cost benefit analysis framework answers the fundamental question of whether or not society is better off after making the investment in grid-scale EES.The uncertain benefit and cost streams are evaluated through a Monte Carlo simulation and then arranged through a discounted cash flow to provide a net present social value of the investment.SNS represents the first commercially-deployed, multi-purpose grid-scale battery in Great Britain, and it has been selected as the case study for this research because its empirical results from years of trials are well documented.The paper is organised in the following manner.Section two provides the background and a brief description of our case study: the Smarter Network Storage project.Section three discusses the Cost Benefit Analysis method.Section four identifies and quantifies the social costs.Section five identifies and estimates the different social benefits and related revenues streams.Section six discusses the results by combining the analysis of the costs and benefits and the implications of the net present value results.Section seven lays out the conclusion and offers insights into policy recommendations for enhancing the value of EES through electricity market reforms.In order to facilitate the low carbon transition of the power system, the Office of Gas and Electricity Markets Authority established the Low Carbon Network Fund, a £100 million per annum fund – which ran for 5 years from April 2010 to March 2015 - to support clean energy demonstration projects sponsored by Distribution Network Operators.4,One such DNO, UK Power Networks established the Smarter Network Storage project in 2013 to showcase how EES could be used as an alternative to traditional network reinforcements, enable future growth of distributed energy resources, and a low carbon electricity system.The Smarter Network Storage project deployed a lithium-ion battery with 6 megawatts and 10 megawatt-hours of power and energy, respectively, at the Leighton Buzzard Primary substation to offset the need for an additional sub-transmission line to alleviate capacity constraints.Electricity supply in Great Britain is composed of four key sectors: generation, transmission, distribution, and suppliers.Within this electricity supply chain is the Leighton Buzzard Primary substation, an asset owned by 
UKPN and a bottleneck for providing reliable power to customers in the distribution network. Leighton Buzzard is a town located in Bedfordshire, England and has a population of approximately 37,000 people. The current Leighton Buzzard primary substation design includes a 33/11 kV substation and two 33 kV circuits, each with a rated thermal capacity of 35.4 MVA. Due to cold snaps, UKPN experiences its peak demand for electricity in the winter, to the extent that the local peak demand surpasses the 35.4 MVA capacity limit. Fig. 1 illustrates this capacity problem dating back to December 2010. To alleviate the current capacity constraints, the Leighton Buzzard substation is able to re-route 2 MVA of electricity supply. This transfer capacity from neighbouring sections of the distribution network has successfully resolved the peak demand problem in Leighton Buzzard in the short-term; however, it is costly and does not avert the larger issue of growing peak demand over the long-term. Thus, UKPN sought to investigate two potential long-term solutions to the capacity constraint. The first option is the conventional approach that DNOs like UKPN would historically choose using a least-regret investment criterion. This option includes building new distribution infrastructure to support the growing electricity needs: an additional 33 kV circuit connecting to the 132/33 kV Sundon Grid and a third 38 MVA transformer located at the Leighton Buzzard substation. This reinforcement would provide an additional 35.4 MVA in firm capacity at Leighton Buzzard, which is significantly above predicted capacity requirements for the medium-to-long term . The second option is often referred to as a Non-Wires Alternative investment because it need not require the expansion of the wires on the electricity grid. Rather, UKPN could build an EES device at the site of the substation to alleviate the capacity constraints. The EES would discharge electricity during times of peak demand to alleviate stress on the electricity grid, and then charge during times of low demand. The EES would be configured and dispatched in a manner to offset the need for the conventional upgrade. Fig. 2 compares the two options for network reinforcement. On the one hand, UKPN could build a third circuit between the 132/33 kV Sundon grid and the 33/11 kV Leighton Buzzard substation. On the other hand, UKPN could build an EES device to offset the need for the conventional upgrade. Using financing from the Low Carbon Network Fund, UKPN opted to choose the latter solution and build the EES device in 2013, called the Smarter Network Storage project. The EES device for the Smarter Network Storage project was a lithium-ion battery of size 6 MW/7.5 MVA/10 MWh. In addition to deferring the upgrade for capacity, the Smarter Network Storage project sought to realise additional benefits from building a battery by participating in the wholesale power markets and providing location-specific and system-wide services. Due to the unbundling regulations in the UK for DNOs, the Smarter Network Storage project is owned by UKPN but it is operated by Smartest Energy and its aggregator is Kiwi Power. Since 2013, UKPN has recorded empirical results from testing and trialling the battery, as it performs in reality while interconnected to the grid. Using the empirical trial runs, this paper seeks to evaluate the decision to invest in EES from a societal perspective using social cost benefit analysis. The social cost benefit analysis framework is an effective tool for evaluating the publicly sponsored investment in Smarter Network Storage. A full social cost benefit analysis should be able to address the impact of an EES project on economic efficiency and equity . Galal et al. identify three main agents in society: consumers, private producers and government. When applying their framework to the electricity supply chain, the agents in society include OFGEM, National Grid, UKPN, consumers, suppliers, and developers. Within electricity markets, deploying a battery would provide different revenue streams for each agent, hence requiring a different business model subject to the individual agent’s value proposition. However, the social cost benefit analysis takes a more holistic perspective looking across the various agents of the energy supply chain, incorporating market-based value streams and non-market shadow prices. This tailored social cost benefit analysis framework is illustrated in Eqs.
and . The use of the sigma notation is critical to the social cost benefit analysis because the time dimension for the benefit and cost streams extends through the useful life of the battery project. The useful life is defined as the period from the beginning of construction to the end of decommissioning the project. This enables the coupling of a social cost benefit analysis with a useful lifecycle assessment to evaluate the techno-economic performance of the battery. In addition to the useful lifecycle assessment, the social cost benefit analysis will require the use of discounted annual cash flows to determine the net present value of both the benefits and costs. Moreover, note the removal of all transfer payments from one agent to another within society. Transfer payments are the exchange of financial claims in which there is no net value generated to society. The need to remove transfer payments induces a more critical examination of project cash flows which rely on taxation, subsidies, duties, and improvements in the cost of financing, because these mechanisms may merely involve the transfer of resources from one agent to another within society. In Section 4, certain benefit streams are also omitted from the analysis because they can be classified as transfer payments. For the useful lifecycle assessment of the social cost and benefit streams, the discount rate is determined by the pre-tax weighted average cost of capital. The pre-tax WACC removes the impact of taxation from the financial analysis and values the risk and uncertainty associated with the EES project. SNS established the cost of equity at 7.2%, the cost of debt at 3.8%, and the debt-equity ratio at 62%; therefore, the pre-tax WACC in real £ terms is 5.09%. For this analysis, the discount rate was varied between 3.0% and 7.2%. All values in this report are presented as £2013, unless otherwise noted.
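As a quick numerical check of the stated discount rate, the 5.09% pre-tax WACC can be reproduced as below, under the assumption (not stated explicitly above) that the quoted 62% is the debt share of total capital (gearing) rather than a debt-to-equity ratio.

```python
# Minimal sketch: reproduce the stated pre-tax WACC of ~5.09% (real terms).
# Assumption: the quoted 62% is interpreted as debt / (debt + equity).
cost_of_equity = 0.072   # 7.2%
cost_of_debt = 0.038     # 3.8%
debt_share = 0.62        # assumed gearing
equity_share = 1.0 - debt_share

pre_tax_wacc = equity_share * cost_of_equity + debt_share * cost_of_debt
print(f"Pre-tax WACC: {pre_tax_wacc:.4%}")   # -> ~5.09%
```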
The social cost benefit analysis framework in this study is adapted from Galal et al. . It includes the use of a counterfactual such that the calculated NPV guides investments in EES relative to other solutions. Using the Kaldor-Hicks compensation principle, the investment in the Smarter Network Storage project would be deemed worthwhile to society if NPV > 0. Such a result would warrant that the investment was net-beneficial to society . On the other hand, if NPV < 0, this would signify that the investment was net-costly to society. Incorporating risk and uncertainty enhances the project appraisal and policy assessment because the social benefits and costs are not deterministic values but rather subject to variation under different future scenarios. In the case of evaluating the Smarter Network Storage project, the variables are stochastic and vary significantly due to uncertainty in future reforms of the electricity market and policy settings, etc. A Monte Carlo simulation is a computer-based technique that uses statistical sampling and probability distribution functions to simulate the effects of uncertain variables . Monte Carlo simulations should be paired with a social cost benefit analysis because it is meaningful to attach statistical distributions to model the uncertainty. The Monte Carlo simulation is executed for 10,000 multi-dimensional trials and applies a normal distribution (‘Normal Distribution for Assessing Social Benefits’) to each of the eight benefit streams and to each of the six cost elements, and incorporates the potential variation in the lifespan of the battery and the discount rate. Typically, the social costs vary by type of EES technology, the power and energy capacity, and the use-case. This section establishes a clear and consistent framework for capturing all the useful lifecycle costs of EES and then applies it to the Smarter Network Storage project. The costs are bifurcated into capital expenditures and operating expenditures. Our use of the lifecycle cost analysis captures capital and operating maintenance costs of storage systems. Some maintenance costs are a function of the cycling of storage and are embedded into the Monte Carlo cost and degradation simulations. The cost of financing the battery is removed from the analysis because it is not a social cost. The capital expenditures of a lithium-ion battery pertain to the battery cells, the battery pack, the balance of system, the soft costs, and the engineering, procurement and construction. The following discussion parses out the intricacies of the costs of EES devices and normalizes the costs to the size of 6 MW/10 MWh for the Smarter Network Storage project. The battery cells and packs are at the core of the battery energy storage system. The Smarter Network Storage project acknowledges that the main cost driver of the cells and packs is the power-to-energy ratio of the storage device. Therefore, costs of these components are often reported as £/kWh. It is estimated that an identical battery with the size of 6 MW/6 MWh would have 60–65% of the total capital expenditure of the 6 MW/10 MWh battery . The Smarter Network Storage project includes 192 Samsung SDI lithium-manganese battery cells connected in series per pack. These packs were then placed into 264 trays per rack, with 22 racks connected to each 500 kW of the storage management system. Each switchgear includes 12 trays, composed of 22 battery strings, and the entire battery system is accompanied by an 11 kV switch room. For a 6 MW battery, the result is a total of 50,688 Samsung SDI battery cells that are integrated into battery packs. Additional battery
configurations and technical specifications are provided by the project developers and vendors .The balance of system for the BESS includes just the hardware costs for the equipment to support the functionality of the battery cells and packs.The balance of system costs include the rectifier and the bi-directional inverter because the battery operates in direct current but charges from and discharges to the grid, which operates on alternating current.The balance of system costs include power conversion systems, enclosures, containerization, safety equipment, system packaging, and any other system operating technologies.The balance of system costs are often reported in £/kW because the equipment is designed to support the maximum power output of the battery.The soft costs include the customer acquisition, customer analytics, industry education, permitting fees, supply chain costs, and installation labour.As evidenced by other more mature technologies such as photovoltaics, soft costs can decline rapidly as standardization reduces the permitting fees, labour-hours, and supply chain costs.As the EES industry matures, the soft costs will likely follow an asymptotic cost decline curve.The engineering, procurement, and constructions costs largely included civil engineering, procurement of land for use, and logistics for construction of the site.The need to construct an entire building to house the BESS became the driver for the EPC costs.For the Smarter Network Storage project, a building equivalent to the size of three tennis courts was constructed to safely and securely operate the 240 tonnes of equipment.The BESS has annual operating expenditures that include system upkeep and electricity purchasing.Upkeep costs include inspection & maintenance, spare parts, facilities costs, insurance, management & administration, control systems, and risk management & energy trading.Regardless of the utilisation of the BESS, these upkeep costs are relatively similar year-over-year in real £ terms.The BESS has electricity purchasing costs that include tariffs and charges to interconnect to the electricity grid and provide wholesale power services.These costs include the wholesale energy price during charging of the battery, low voltage auxiliary consumption, balancing services use of system charges, residual cash flow and reallocation cash flow, contract for differences operational levy, and daily service fees.These costs are directly a function of the utilisation and type of balancing service provided by the BESS; thus, they will fluctuate year-over-year as the performance and dispatch of the battery and the wholesale energy price change over its lifespan.Operating costs that have been omitted from this analysis include the DNO fixed charge, the DNO capacity charge, and tolling because they are transfer payments.The DNO fixed charge and capacity charge are transfer fees incurred by every grid-interconnected device to the distribution system of UKPN.In the same vein, tolling is a transfer fee that the UKPN levies on Smartest Energy and Kiwi Power to operate its equipment.Degradation costs are a function of the utilisation and age of the BESS.The algorithm of the degradation model includes cycle frequency, length, and characterization; therefore, providing different wholesale market services may exhibit a unique degradation of the battery.The results from the degradation analysis unveiled that the battery has a Coulombic efficiency of 0.999954 when cycling between 0 and 68% of its depth of discharge .This 
would result in approximately a 4.6% degradation of the battery cells and pack per 1000 cycles. When the battery reaches 75% of its rated nominal energy capacity, the battery is determined to have reached its full lifespan and needs to be decommissioned, justified by the growth of the battery’s internal resistance and subsequent heat loss . In addition to degradation from the utilisation of the battery, there is also degradation from its age after the manufacturing date. This wear-and-tear affects the energy capacity of the battery cells, packs, and balance of system. SNS calculated that the energy capacity degradation was 0.5% per annum. Eq. calculates the energy capacity degradation of the battery and Eq. calculates the lifespan of the battery. Fig. 3 illustrates the degradation of the Smarter Network Storage project over time. Depending on the utilisation of the battery and the annual wear and tear on the system components, the lifespan of the battery ranges from 10 to 14 years. This lifespan is critical to the social cost-benefit assessment of the battery, and the range is incorporated into the Monte Carlo simulations for the social cost benefit analysis to account for the variability and uncertainty in future dispatch and scheduling of the battery. Not only does the energy storage capacity degrade with time and utilisation, but the roundtrip efficiency of the battery degrades as well. At the beginning of life, the AC-AC roundtrip efficiency of the battery is 87% , which is largely a function of the BESS and the AC/DC converter. The Smarter Network Storage battery is estimated to experience efficiency degradation from the cells, pack, and the balance of system of 1% per annum and 1% per 1000 cycles ; meanwhile, the AC/DC converter experiences slower rates of degradation . The efficiency degradation is critical to assessing the operating costs of the battery because these energy losses require the battery to draw more electricity from the grid to provide equal output services, thereby resulting in higher operating costs. The BESS also draws power from the grid to operate its auxiliary equipment to monitor the state of charge of the battery, power communication signals with the grid and grid operator, and power the telemetry equipment with the battery operator. This “parasitic load” is 29.2 kW and reduces the rated power output of the BESS . A machine learning approach has been shown to facilitate battery state-of-health diagnosis and prognosis, potentially extending battery lifespans in the future . The social costs of the Smarter Network Storage project vary over time as the industry exhibits economies of scale and the learning curve. Therefore, the costs have been dissected between the costs likely incurred by the Smarter Network Storage project in 2013 and the projected cost decline for identical battery installations deployed between 2017 and 2020. In agreement with a range of studies , the total social costs in 2013 are £10.70 million and drop to between £8.31 and £6.51 million before the end of the decade.
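Before turning to the cost breakdown, the lifespan range quoted above (10–14 years) can be roughly reproduced from the degradation figures reported for the project. The sketch below is a simplification only: it assumes that cycling losses (about 4.6% per 1000 cycles at the stated depth of discharge) and calendar ageing (0.5% per annum) combine multiplicatively, and that the battery performs roughly 350–500 equivalent cycles per year; neither assumption is stated explicitly in the text, and the project's own degradation equations may differ.

```python
import math

# Simplified sketch of the end-of-life estimate, assuming multiplicative
# combination of cycling and calendar degradation (an assumption, not the
# project's published model).
CYCLING_LOSS_PER_1000 = 0.046   # ~4.6% capacity loss per 1000 cycles
CALENDAR_LOSS_PER_YEAR = 0.005  # 0.5% capacity loss per annum
END_OF_LIFE_FRACTION = 0.75     # decommission at 75% of nominal energy capacity

def lifespan_years(cycles_per_year: float) -> float:
    """Years until remaining capacity falls to the end-of-life threshold."""
    per_year_retention = (
        (1.0 - CYCLING_LOSS_PER_1000) ** (cycles_per_year / 1000.0)
        * (1.0 - CALENDAR_LOSS_PER_YEAR)
    )
    return math.log(END_OF_LIFE_FRACTION) / math.log(per_year_retention)

for cycles in (350, 400, 450, 500):
    print(f"{cycles} cycles/year -> ~{lifespan_years(cycles):.1f} years")
```

Under these assumptions the estimated lifespan falls between roughly 10 and 13.5 years, consistent with the 10–14 year range carried into the Monte Carlo simulations.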
Fig. 4 shows the breakdown of each cost component in £/kWh and how the costs are projected to decline over time. The battery cells and balance of system are the two largest cost drivers of current social costs; however, these two components are also expected to witness the greatest cost decline in the near future. For the Monte Carlo simulation, the future costs are presented as a uniform distribution function to reflect the dynamic changes in the costs of the BESS. While most studies provide the 1st and Nth cost of BESS, the approach used in this analysis does not overlook that the real cost of BESS during the transition of the electricity grid can be anywhere between the 1st and Nth cost. EES can provide multiple services to multiple markets. A comprehensive literature review of studies was undertaken to collect the universe of benefits from EES projects. These locational and system-wide benefits are then organized by their beneficiary, including National Grid, OFGEM, UKPN, Developers, Customers, and the Wider Society. The categories that are underlined in Appendix D are classified as true social benefit streams from the Smarter Network Storage project. The Smarter Network Storage project was the first grid-scale storage project in Great Britain to demonstrate the simultaneity of some of these multiple services. However, it is not possible for an EES device to provide all of these aforementioned services simultaneously. Some of these services would double count the benefits of an EES project, or the participation in one service would disqualify the EES from participating in another service. The Smarter Network Storage trials verified that certain value streams cannot be bundled together or do not provide net benefits to society. These value streams have henceforth been removed from the calculation of the true social benefits of the battery project. These services are: Enhanced Frequency Response, Short Term Operating Reserve, Triad Avoidance, Capacity Markets and Reliability & Resiliency. Appendix E provides a short description of these services. Therefore, only a handful of benefits can truly be stacked together in the social cost benefit analysis. These are described in what follows. The system frequency, 50 Hz at equilibrium in Great Britain, measures the balance between supply and demand. If the frequency falls out of the range of 49.5–50.5 Hz, there may be damage to the power electronics interconnected to the grid. The Smarter Network Storage project participated in providing static firm frequency response rather than dynamic firm frequency response because it was more cost-effective during the trial period. During the trial period, the battery was available for over 7000 h per annum and utilised by National Grid for this service during two separate events. National Grid compensates frequency response providers with an availability payment when the unit is committed to providing frequency response and a utilisation payment when the unit is dispatched for frequency response. In agreement with National Grid and Perez et al.
, the estimated availability payment is £8/MW/h and the utilisation payment is £24/MW/h, and the Monte Carlo simulation used a market price fluctuating ±25% for the availability payment and the utilisation payment, each. Frequency response is the largest revenue stream from wholesale power services, making it a critical feature of the social benefits for grid-scale EES projects. During the course of a day, the wholesale energy market price may fluctuate considerably; here it is assumed to vary between £30/MWh and £50/MWh . EES is able to take advantage of the diurnal price fluctuation by charging the battery during times of low prices and discharging during times of high prices, when the EES is not providing other critical grid services. The Smarter Network Storage project participates in arbitrage for approximately 150 h of discharge p.a. , and the Monte Carlo simulation incorporates a ±15% price fluctuation at the time of buying and selling electricity, each. The results show that the revenues from arbitrage are significant but not enough to justify a grid-scale EES project on their own. The Smarter Network Storage project was designed to defer the need to upgrade the capacity of the sub-transmission line connecting the Sundon Grid to the Leighton Buzzard primary substation. Therefore, there is value in avoiding the cost necessary to upgrade the distribution circuit, and this can be directly valued using the counterfactual: the cost of the conventional distribution upgrade. The estimated cost for the conventional upgrade would be £6.2 million . However, building a sub-transmission line would provide an additional capacity of 35.4 MVA and have an expected life of 40 years; whereas, the Smarter Network Storage project only provides 7.5 MVA and has an expected life of 10–14 years. In order to determine the true benefit of distribution deferment, it is critical to determine the length of that deferment. Fig. 5 shows peak demand growth juxtaposed with capacity increases from the Smarter Network Storage project. It is concluded that the Smarter Network Storage project provides sufficient capacity to accommodate peak demand growth on the circuit throughout its lifespan. With the battery providing sufficient additional capacity in the near future, the value of distribution deferment is predicated on the lifespan of the battery. The benefit is captured through the avoided cost of the conventional upgrade, which is represented as an annuity of cash flows. The annual cash flows are calculated using the discounted cash flow model illustrated in Eq.
with the discount rate equal to the WACC, the present value of £6.2 million, and t = 40 years.The present value of that cash flow over the lifespan of the battery is the value of the distribution deferment, and it is determined that this value is significant in the social cost benefit analysis.Network support is defined as the portfolio of benefits pertaining to reactive power support, power quality, voltage control, and energy loss reduction in the distribution system.Demonstration results from the Smarter Network Storage project proved that it can provide these non-market services to the DNO, thereby providing a tangible benefit of system cost-savings.Therefore, these benefits are calculated through the use of shadow prices to value these non-market benefits.SNS calculated that the value of network support for the Smarter Network Storage project in 2030 would be approximately £48/kW-yr.For the Monte Carlo simulation, this value is determined to be an upper bound for today’s value of network support, with the expected value and lower bound at −15% and −30%, respectively.The results show that network support from batteries provides a relatively valuable service to society.Security of supply ensures the reliability of adequately supplying electricity to the customer .Each peak shaving event is characterised by the duration of the event and the maximum power needed to reduce the demand to appropriate levels.This value is distinctly different from the distribution deferral because it monetizes the wholesale energy market-based benefits associated with peak shaving.During the trial period, the annual amount of peak shaving required was 97 h spread across 45 days.During this time, the maximum power required for peak shaving was 5.7 MVA and the annual energy requirement for peak shaving was 141.6 MVAh .The revenue calculation from peak shaving is equivalent to that of arbitrage; the only difference being that peak shaving is an involuntary form of arbitrage.Therefore, EES charges at £30/MWh and discharges at £50/MWh and the Monte Carlo simulation incorporates a ±15% price fluctuation in each.The Great Britain local electricity price to a customer includes long-run transmission and distribution system costs, but it does not have locational transmission congestion costs and transmission losses .Therefore, local peak shaving is not necessarily coincident with system-wide peak shaving, especially for a heavily congested Leighton Buzzard substation.UKPN valued security of supply with identical variance in energy market prices to arbitrage.Distributed generation is the generation of electricity at or close to the point of consumption and has become increasingly prevalent due to declining prices, customer choice, and backup power.The power grid in Great Britain was designed for uni-directional power flows from centralised generation; however, the advent of DG may create bi-directional power flows on the power grid today.These N − 1 conditions are exacerbated during times of high DG production and low electricity demand, hence DG can be curtailed.EES can increase the capacity to host DG and reduce DG curtailment, thereby creating a social benefit because the EES can effectively stabilize the power system by maintaining a balance of supply and demand in real-time .Both the battery and the conventional upgrade may be able to reduce distributed generation curtailment by increasing the hosting capacity of the distribution circuit; however, only the battery enables bi-directional power flows by absorbing 
excess DG, such that this social benefit is additional beyond the conventional upgrade.In Great Britain, DG is largely driven by wind and solar, which have a capacity factor of 30% and 11.16%, respectively .Within UKPN’s Eastern Power Network, the curtailment for DG wind and solar is roughly 6% .It has been calculated that grid-scale EES could reduce this curtailment by half .The product of reduced curtailment and the wholesale energy market price of £40/MWh determine the value of the reduced curtailment.Given the large uncertainty surrounding future DG capacity, the Monte Carlo simulations incorporate variability in DG growth, ranging from 5% to 15% per annum, a wholesale energy market price fluctuating ±15%, and initial DG installed capacity between 4 and 8 MW.A carbon price is a price applied to carbon pollution to encourage polluters to reduce the amount of greenhouse gas they emit into the atmosphere.To meet its greenhouse gas emissions reduction goals of 80% from 1990 levels by 2050, Great Britain currently uses the European Union Emissions Trading Scheme and Carbon Price Floor.However, the social cost of carbon is used for this project appraisal because it quantifies the damage costs incurred by society from carbon emissions.The social cost of carbon is determined by the Department of the Energy and Climate Change ,10 and prices are converted to £2013 in Fig. 6.The social cost of carbon is the shadow price for the value of each tonne of carbon dioxide that is abated by the Smarter Network Storage project.It is estimated that this Project abated 1.7 kilo tonnes of carbon dioxide per annum ; therefore, the product between the quantity of carbon abated and the social cost of carbon is equal to the value of this benefit stream.For the social cost benefit analysis, this avoided cost of emitting more carbon into the atmosphere is algebraically represented as a benefit of the Smarter Network Storage project.The Monte Carlo simulations incorporate the variability in the social cost of carbon.At the end of the battery life, there still exists some terminal value of the assets, including the balance of plant and the civil works.Although a secondary market may exist for the battery cells and packs, such a market is not robust enough for this analysis.The balance of plant and civil construction may have a life that is longer than the cells and packs and have a terminal value that is calculable.If the developers of the Smarter Network Storage project were to replace the battery cells and packs, they may not need to replace the entire balance of plant; therefore, there is direct value attributed to these assets.Furthermore, the civil works of the Project was designed to incorporate an 8 MW/17 MWh battery and included a lease for the land for 99 years , creating an option value at the end of the original battery’s life.At the end of the project life, UK Power Networks has the option to install a new battery, develop another alternative solution, or energy efficiency and distributed generation may cause peak demand to fall below the original capacity of the distribution circuit insofar that no upgrade is required any longer.Battery augmentation and repowering would extend the lifespan of the BESS, while maintaining high asset utilisation rates.The option value is especially beneficial during the uncertainty of Great Britain’s clean energy transition because it increases the choices and flexibility for future solutions.The terminal value of these assets is calculated using a straight-line 
depreciation of 18% per annum .The Monte Carlo simulation incorporates the variability in the life of the assets and a depreciation of ±5%.In order to successfully and realistically provide multiple services for multiple stakeholders, the Smarter Network Storage project developed a Smart Optimization and Control System to optimize revenues from its dispatch and scheduling.The SOCS is comprised of a Forecasting Optimization Software System to forecast demand and remunerative markets, which is the critical first step in optimizing the set of services and revenues generated by the battery because grid services need to be tendered for weeks-to-months in advance.From the forecasts, the BESS then calculates a multiple linear regression model to optimize future battery dispatch in the multiple service markets .Neural networks and machine learning could further optimize battery performance and dispatch to account for both battery and grid state-of-health .The BESS is configured to maximize social value with inputs from FOSS and subject to the constraints of the state of charge and security of supply.The constraints to this optimization of the multiple services are two-fold:State of Charge.State of charge is the measure of the immediate capabilities of the battery and is analogous to a “fuel gauge” for the battery .The battery must be at the required level of charge to provide a distinct service to the grid.Participating in one service may preclude the ability for EES to provide another service because the EES will not be in the required state of charge.Therefore, the state of charge has been included as a constraint when valuing the beneficial services from the Smarter Network Storage project.Security of Supply.The Smarter Network Storage project was designed to offset the need for conventional distribution reinforcement by reducing the peak demand on the existing grid infrastructure.Therefore, the dispatch and scheduling of the project must always prioritize the security of local supply above all other services in order to maintain the reliability on the grid.The EES shall not be dispatched for any other service that may conflict with its ability to provide security of supply, and this constraint is also accounted for when valuing the Project’s beneficial services.The social costs from Section 3 have been calculated for 2013 and projected for 2017 to 2020.The present values of the useful lifecycle costs are presented in Table 1.At any point between 2017 and 2020, the probability of realizing these costs is considered equally likely, thus these values present the bounds for the uniform distribution in the Monte Carlo simulation.This section discusses the assumptions and calculations of the social benefits.The present values of the useful lifecycle assessment are presented in Table 2.The values are presented with a 95% confidence interval to incorporate empirical market data and real-world risk and uncertainty.The eight benefits streams, six cost elements, the time horizon, and the discount rate were all incorporated into the Monte Carlo simulations to determine the NPV of the Smarter Network Storage project.For Figs. 7and 8, the x-axis is the NPV result and the y-axis is the frequency of that NPV result from the Monte Carlo simulations.As evidenced by the difference in the two results, lowering the capital costs through economies of scale is the quintessential driver to improving the NPV of grid-scale EES projects.Fig. 
7 shows by way of comparison that, for similar projects installed with the 2013 costs, the expected value would be −£1,484,420 and the median would be −£1,469,634. The standard deviation was £1,595,258. Furthermore, the results show that of the 10,000 trials, 1% had a positive NPV and 99% had a negative NPV. These results prove that, in 2013, the social costs outweighed the benefits. Such an investment would likely not have passed the Kaldor-Hicks criterion, due to the then-high capital costs of the battery technology. The simulation also shows that a positive NPV would only happen under a limited number of extremely positive outcomes. Fig. 8 shows that, if the project were installed any time between 2017 and 2020, the expected value would be £1,833,887 and the median would be £1,840,840. The standard deviation was £1,643,244. Furthermore, the results show that of the 10,000 trials, >99% had a positive NPV and <1% had a negative NPV. These results prove that, for projects to be installed between 2017 and 2020, the social benefits outweigh the costs. The investment in a grid-scale EES project would likely satisfy the Kaldor-Hicks criterion, even with sub-par market prices and shadow prices or a higher discount rate. Sustained investment and production-cost efficiency contributed to a techno-economic performance improvement of approximately £3.3 million between 2013 and 2017–2020. The social welfare generated from EES projects has improved over time and the results show that grid-scale EES can support the electricity grid's transition to a low carbon, reliable, and affordable network. The following discussion offers multi-faceted policies to support EES projects with the objective of improving their NPV and supporting the future electricity grid. Electricity Market Reforms. EES can provide grid services ranging from power quality and load shifting to bulk power management. To maximize the utilisation of the EES asset, reforms to the electricity market must unlock the potential for EES to provide more simultaneous benefits, such as the capacity market and upward and downward reserves in the ancillary service market. Furthermore, the current ancillary service market compensates suppliers based on power capacity rather than energy capacity. For the Smarter Network Storage project, 10 MWh was required for local security of supply, and the added benefits of building a larger energy capacity are not fully appreciated in the ancillary service market to offset the added costs. This disconnect between cost drivers, reliability drivers, and revenue drivers suggests a need for further electricity market reforms. Demand response can provide grid services ; however, a single battery often cannot participate in demand response markets unless aggregated with others to reach larger capacity levels. Electricity market reforms are necessary to value EES for both power and energy to align with the flexible sizing characteristics of EES project investments. Moreover, electricity market reforms are critical to turning non-market based benefits - such as carbon abatement, network support, and the option value to incrementally increase system capacity - into market-based benefits which align private incentives to invest in EES with their public benefits. Research, Development, and Deployment. The cost decline of EES has been the main driver for the improved NPV of grid-scale EES projects exhibited over time. Operating costs can be lowered by preventing interconnection costs from being applied twice to the battery because the battery is
currently classified as a generator and consumer.Soft costs can be lowered through greater standardization of the installation and permitting process.Hard costs can be lowered through greater research and development of battery cells, packs, and the balance of system.Degradation costs can be lowered through a larger database of empirical data that improves battery degradation forecasting, modelling, and mitigation.Investment Risk Mitigation Strategy.Of the plethora of EES project benefits, frequency response is the critical revenue stream to warrant new EES investment.Long-term contracts and price certainty for frequency response services should be a policy focal point to reduce the risk and uncertainty in EES investment returns.The Smarter Network Storage project was the first EES in Great Britain to trial many market services simultaneously, and this diversification of revenue streams can become a risk mitigation strategy to attract future investment in the technology.Optimal Locational Benefits.The economics of EES projects relies heavily on both system-wide and locational benefits.Many ancillary services are compensated on system-wide levels; however, as the future electricity grid becomes more distributed and decentralized, the location-specific benefits will become increasingly important.Locational benefits were critical contributions to the overall success of the Smarter Network Storage project.Optimally siting future grid-scale EES projects in ideal locations can turn projects from a negative NPV to a positive NPV.Analysis of the capacity of the local distribution system to absorb more renewable energy/more demand will help identify the optimal locations to deploy future EES projects.The social cost benefit analysis provides a strong framework to assess whether the regulatory regime should encourage more investment in grid-scale EES.We commend this approach to regulators and those assessing the public benefit of grid scale EES.Our approach draws attention to the fact that positive private NPVs for such storage projects may not be accurately reflecting their true costs and benefits from the point of view of society.This framework accounts for both the market and non-market benefits from the perspective of society and juxtaposes them with the social costs, thereby capturing insights into economic development, equity, and efficiency.Transfer payments between agents within society are removed from the analysis to provide a project appraisal that truly represents the net value to society.Through the Kaldor-Hicks criterion, a positive NPV of the grid-scale EES investment improves the state of society overall.It is also concluded that a Monte Carlo simulation should be paired with the social cost benefit analysis when incorporating the risk and uncertainty of future benefit and cost streams of grid-scale EES.Rather than providing deterministic values, stochastic modelling incorporates the many real-world variables that affect the net present value of a project.For a stochastic sensitivity analysis, Monte Carlo simulations are helpful because statistical distributions can be applied to the benefit and cost streams.The benefit streams from the Smarter Network Storage project are only a subset of the universe of possible benefits emanating from grid-scale EES.Although the Smarter Network Storage project was the first battery in Great Britain to trial multiple market services, some services were not able to be paired together or were not truly social benefits.Key benefit streams of grid-scale 
EES projects, such as Capacity Markets, STOR, and Triad Avoidance, were concluded to be either not social benefits or uneconomical to perform.Within the social benefit analysis, it is critical to include energy capacity and electrical efficiency degradation.The degradation determines the lifespan of the project, which directly impacts the value of distribution deferment and the terminal value of the asset and indirectly affects the other six benefit streams.Claiming the value of distribution deferment is equivalent to the cost of the conventional distribution upgrade would overstate the true value because the Smarter Network Project is a 10 to 14-year investment; whereas, the conventional distribution upgrade is a 40-year investment.Thus, the value of distribution deferment should be calculated as a fraction of the cost of the conventional distribution upgrade, subject to the timespan of the deferment.The results of the social cost benefit analysis show that an EES project installed in 2013 likely had a negative NPV, but an identical project installed between 2017 and 2020 likely will have a positive NPV.The social welfare generated from EES continues to increase via project cost decline, performance and lifespan improvement, optimizing locational benefits, supportive long-term financial contracts, and favourably reforming electricity markets for grid-scale EES technologies.Project costs can be lowered through streamlined interconnection processes to reduce permitting and installation costs as well as prevent interconnection costs from being applied twice to the battery for generation and consumption.Electricity market reforms can maximize the utilisation of EES through the provision of potential ancillary services, such as enhanced frequency response and coupling those services simultaneously with the capacity market.Properly compensating EES by energy capacity payments and for non-market based services, such as carbon abatement and network support, can diversify revenue streams and reduce the risk in EES project investment.Ultimately, the analysis shows how society can cost-effectively invest in EES as a grid modernization asset to facilitate the transition to a reliable, affordable, and clean power system.
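To make the valuation framework concrete, the sketch below illustrates in simplified form how the Monte Carlo NPV calculation described in this study can be assembled: normally distributed annual benefit and cost streams, a battery lifespan drawn from the 10–14 year range, a discount rate spanning the 3.0–7.2% interval around the pre-tax WACC, and a distribution-deferment benefit annuitised from the £6.2 million counterfactual upgrade over its 40-year life. The stream values and distribution widths below are illustrative placeholders, not the study's actual inputs, and some streams (for example the terminal value) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
N_TRIALS = 10_000

def deferment_value(upgrade_cost, rate, upgrade_life_years, deferment_years):
    """Value of deferring a conventional upgrade: annuitise its cost over its
    own life, then take the present value of that annuity over the deferment
    period (here, the battery lifespan)."""
    annuity = upgrade_cost * rate / (1 - (1 + rate) ** -upgrade_life_years)
    return annuity * (1 - (1 + rate) ** -deferment_years) / rate

npvs = []
for _ in range(N_TRIALS):
    life = int(rng.integers(10, 15))          # battery lifespan, 10-14 years
    rate = rng.uniform(0.03, 0.072)           # discount rate range used in the study

    # Illustrative annual benefit streams (GBP/year), drawn from normal distributions.
    annual_benefits = sum(
        rng.normal(mean, 0.15 * mean)
        for mean in (
            350_000,   # frequency response (availability + utilisation), placeholder
            20_000,    # arbitrage, placeholder
            200_000,   # network support, placeholder
            5_000,     # security of supply / peak shaving, placeholder
            30_000,    # reduced DG curtailment, placeholder
            100_000,   # carbon abatement at the social cost of carbon, placeholder
        )
    )

    # Value of deferring the 6.2m GBP conventional upgrade over the battery's life.
    deferment_pv = deferment_value(6_200_000, rate, 40, life)

    # Illustrative costs: upfront capital (2017-2020 range) plus annual operating costs.
    capex = rng.uniform(6_510_000, 8_310_000)
    annual_opex = rng.normal(250_000, 40_000)

    discount_factors = [(1 + rate) ** -t for t in range(1, life + 1)]
    npv = (sum((annual_benefits - annual_opex) * d for d in discount_factors)
           + deferment_pv - capex)
    npvs.append(npv)

npvs = np.array(npvs)
print(f"Expected NPV: £{npvs.mean():,.0f}")
print(f"Share of trials with NPV > 0: {100 * (npvs > 0).mean():.1f}%")
```

Replacing the placeholder streams and distributions with the study's own Table 1 and Table 2 values would be needed to reproduce the reported NPV distributions; the sketch is intended only to show how the benefit stacking, lifespan, deferment annuity and discounting interact in the simulation.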
This study explores and quantifies the social costs and benefits of grid-scale electrical energy storage (EES) projects in Great Britain. The case study for this paper is the Smarter Network Storage project, a 6 MW/10 MWh lithium battery placed at the Leighton Buzzard Primary substation to meet growing local peak demand requirements. This study analyses both the locational and system-wide benefits to grid-scale EES, determines the realistic combination of those social benefits, and juxtaposes them against the social costs across the useful lifecycle of the battery to determine the techno-economic performance. Risk and uncertainty from the benefit streams, cost elements, battery lifespan, and discount rate are incorporated into a Monte Carlo simulation. Using this framework, society can be guided to cost-effectively invest in EES as a grid modernization asset to facilitate the transition to a reliable, affordable, and clean power system.
409
High-energy physics strategies and future large-scale projects
The discovery of a Higgs boson at two LHC experiments in 2012 has completed the Standard Model of particle physics .The SM is not a full theory, since there are several outstanding questions which cannot be explained within the SM, e.g. the composition of dark matter, cause of universe’s accelerated expansion , origin of matter–antimatter asymmetry, neutrino masses, why 3 families?,lightness of Higgs boson, weakness of gravity, etc.These questions imply New Physics.Many of them can be addressed through high-energy and/or high-intensity accelerators.At present knowledge the energy scale of the new physics is unknown.While operating at center-of-mass energies of 7 and 8 TeV in 2011–13, the LHC has not uncovered any evidence yet for physics beyond the standard model.Possibly new information will be provided by LHC proton–proton collisions at higher c.m. energy in 2015–18.The next quarter of a century will see the full exploitation of the Large Hadron Collider and its high-luminosity upgrade, as requested by the 2013 Update of the European Strategy for Particle Physics and by the US “P5” recommendations .Recognizing that circular proton–proton colliders are the main, and possibly only, experimental tool available in the coming decades for exploring particle physics in the energy range of tens of TeV, the 2013 Update of the European Strategy for Particle Physics also requests CERN to “undertake design studies for accelerator projects in a global context with emphasis on proton–proton and electron–positron high-energy frontier machines … should be coupled to a vigorous accelerator R&D programme, including high-field magnets and high-gradient accelerating structures, in collaboration with national institutes, laboratories and universities worldwide” in order to be ready “to propose an ambitious post-LHC accelerator project … by the time of the next Strategy update” .In direct response to this European request, CERN has launched the Future Circular Collider study , the purpose of which is to deliver a Conceptual Design Report and a cost review by 2018.The focus of the FCC study is a 100-TeV c.m. 
proton–proton collider, based on 16-T Nb3Sn magnets in a new 100-km tunnel, with a peak luminosity of 5–20 × 10³⁴ cm⁻² s⁻¹. The FCC-hh defines the infrastructure requirements. Given the enormous energy stored in the FCC-hh proton beams, machine protection and collimation pose new challenges, with crystal collimation among the options considered. The FCC study also comprises the design of a high-luminosity e+e− collider, serving as a Z, W, Higgs and top factory, with luminosities ranging from ≈10³⁶ to ≈10³⁴ cm⁻² s⁻¹ per collision point at the Z pole and the t-tbar threshold, respectively, as a potential intermediate step. In addition, the FCC study considers a proton–lepton option, with a luminosity of up to 10³⁵ cm⁻² s⁻¹, reached in collisions of 60-GeV electrons with 50-TeV protons. The future results from the LHC could also provide the physics case for a 2–3 TeV Compact Linear Collider. A much smaller CERN programme would be a lepton–hadron collider based on the LHC, possibly coupled with a gamma–gamma Higgs factory. CERN is also advancing R&D on proton-driven plasma wake-field acceleration. In parallel to these efforts, the proposed International Linear Collider may proceed in Japan, or China could begin the construction of a 54-km circular Higgs factory. Other large-scale facilities, such as a 300-km circular collider, are proposed in the US. The CERN strategy might need to be adapted in response to the worldwide developments and decisions taken elsewhere. In the following we sketch a few aspects of possible future scenarios, including possible evolutions of the CERN complex, with some emphasis on potential applications of crystals and channeling concepts. The Large Electron–Positron collider (LEP) at CERN has been the highest-energy e+e− collider in operation so far. Its maximum c.m. energy was 209 GeV, and its peak synchrotron radiation power about 23 MW. LEP operation was terminated in 2000. The LHC is the present frontier accelerator, installed in the same tunnel as LEP. It should provide proton–proton collisions at the design c.m. energy of 14 TeV and a luminosity of 10³⁴ cm⁻² s⁻¹, achieved with 1.15 × 10¹¹ p/bunch and 2808 bunches/beam, so that each of the two colliding proton beams contains an energy of ∼360 MJ. The LHC design study began in 1983; 11 years later, in 1994, the CERN Council approved the LHC project. In the year 2010 first collisions occurred at 3.5 TeV beam energy. For 2015, that is 32 years after the start of the design study, first collisions at close to the design energy are expected. Evidently, now is the time to start preparing a new collider facility for the 2030s or 2040s. The official roadmap for the LHC and HL-LHC extends through the year 2035, by which time 3000 fb⁻¹ of integrated luminosity should be accumulated. Specifically, HL-LHC operation will be characterized by a constant levelled luminosity of 5 × 10³⁴ cm⁻² s⁻¹ and by an event pile-up of about 140. The HL-LHC should produce about 250 fb⁻¹/year. More than 1.2 km of LHC plus technical infrastructure will be modified to render this dramatic performance increase possible. Most importantly, the HL-LHC relies on, and will promote, a technology transition from Nb-Ti to Nb3Sn superconductor for hadron-collider magnets. This change of technology will allow field increases by a factor of up to two. Two prototype dipole magnets have already surpassed the HL-LHC design field of 11 T. The proposed International Linear Collider (ILC) is a straight e+e− collider, with a total length of 30 km for a c.m.
energy of 500 GeV and 50 km at 1 TeV. Its two linacs are based on SC acceleration structures at 1.3 GHz with an accelerating gradient of about 30 MV/m. A Technical Design Report for the ILC was completed in 2012. The ILC technology is being used for the European XFEL now under construction at DESY. The present time line foresees a construction start in 2018 and first physics around 2027. The Japanese High Energy Physics community has expressed a strong interest in hosting the ILC. The chosen candidate site is Kitakami in Northern Japan. The proposal is under review by the Japanese ministry MEXT. The European Strategy for Particle Physics was updated in 2013 based on numerous inputs and discussions, including a lively symposium at Krakow the year before. As a result, the top priority of European particle physicists is the full exploitation of the LHC. The second priority is for CERN to undertake design studies for accelerator projects in a global context, with emphasis on proton–proton and electron–positron high-energy frontier machines. This strategy was formally adopted by the CERN Council at a special meeting in Brussels. One response to the Strategy request is the continuation of the design of the Compact Linear Collider (CLIC), which has been ongoing since the early 1980s, with several significant changes over the years. CLIC is another, higher-energy linear e+e− collider, with a total main-linac length of ∼11 km at 500 GeV c.m. and ∼48 km for 3 TeV. A proposed site stretches from Geneva toward Lausanne. The accelerating gradient of CLIC, with a normal-conducting warm linac, is 100 MV/m, and hence more than 3 times higher than the ILC gradient, explaining its greater compactness. Key technologies for CLIC are two-beam acceleration, where an intense lower-energy drive beam is decelerated to locally generate the RF energy used for accelerating the main beam; the generation of the drive beam; and X-band RF. The CLIC Conceptual Design Report, with about 1400 authors and over 1200 pages, was published in 2012. As a direct response to the aforementioned request from the European Strategy, CERN has launched the Future Circular Collider Study, with the mandate to complete a Conceptual Design Report and cost review in time for the next European Strategy Update. Presently an international collaboration is being formed with the goal to design a 100 TeV pp collider together with an 80–100 km tunnel infrastructure in the Geneva area, as well as an e+e− collider as a potential intermediate step, and to also study a p–e collider option. Dipole magnets with a field of about 16 T would allow 100 TeV pp collisions in a ring of 100 km circumference. These parameters represent the study baseline. A similar proposal of a large circular e+e− Higgs factory and later high-energy hadron collider is the CepC/SppC of CAS-IHEP. One of the candidate sites in China is Qinhuangdao, 300 km from Beijing, accessible by car or high-speed train. This region is also known as the Chinese Tuscany. Previous studies of large circular colliders have been, or are, ongoing in Italy, the US and Japan. FCC key technologies include 16 T superconducting magnets, superconducting RF cavities, RF power sources, affordable and reliable cryogenics, as well as novel approaches for reliability and availability. The FCC pp collider opens three physics windows: access to new particles in the few TeV to 30 TeV mass range, beyond LHC reach; immense or much-increased rates for phenomena in the sub-TeV mass range, leading to increased precision w.r.t.
the LHC and possibly the ILC; and access to very rare processes in the sub-TeV mass range, allowing the search for stealth phenomena invisible at the LHC. Table 1 summarizes the baseline beam parameters of the FCC-hh and compares them with those for the LHC and the HL-LHC. Noteworthy are the figures for the event pile-up – which, at the same luminosity of 5 × 10³⁴ cm⁻² s⁻¹, exceeds the HL-LHC value because of a slightly higher cross section –, the total synchrotron radiation power of close to 5 MW in a cold environment, and the longitudinal damping time of about 30 min. Over the last two decades Nb3Sn high-field magnet technology has made great strides forward, thanks to ITER conductor development, US-LARP and EC co-funded R&D activities and the US DOE core development programme. The High-Luminosity upgrade of the LHC, which is expected to be completed by 2025, includes a few tens of Nb3Sn dipole and quadrupole magnets. The HL-LHC thereby prepares the technology base for the FCC-hh. Conceptual cost-optimized designs of FCC 15–20 T high-field dipole magnets in block-coil geometry are illustrated in Fig. 2. One particular challenge for the FCC-hh is machine protection, as the energy per proton beam rises from 0.4 GJ at the LHC to 8 GJ for the FCC-hh, an increase by a factor of 20. The FCC-hh beam energy corresponds to the kinetic energy of an Airbus A380 at a speed of 720 km/h. This can melt 12 tons of copper, or drill a 300-m long hole. Directly related to this challenge is the design of the collimation system, which is most exposed to an errant beam in case of a failure. For the FCC-hh collimation an LHC-type solution is the baseline, but other approaches should be investigated, such as hollow e− beams as collimators, crystals to extract particles, and renewable collimators. When crystals are used, either channeling or volume reflection could be taken advantage of. In the channeling mode, a special crystal cut suppresses the dechanneling and can increase the channeling fraction from 85% to 99%. In the volume reflection mode, the multiple volume reflection effect can be used to increase the deflection angle 5 times. The UA9 experiment at the CERN SPS has demonstrated a strong suppression of the nuclear loss rate in the aligned crystal, as is illustrated in Fig.
3. This experiment has also provided a proof of principle for crystal staging. A set of 6 crystals mounted in series was used to reflect 400 GeV/c protons by 40 ± 2 μrad, with an efficiency of 0.93 ± 0.04. Another application of channeling effects and crystals is in particle-physics detectors. Crystal-based calorimeters can exploit strong-field QED effects to enhance radiation and pair production, leading to a reduced radiation length and lower calorimeter thickness, and to an improved mass resolution. The FCC-hh injector complex can be based on the existing and planned injector chain. The High Energy Booster is installed either in the LHC tunnel or in the new FCC tunnel. The injector and also the pre-injectors can feed fixed-target experiments, in parallel to serving as FCC injectors. The fixed-target physics could be based on crystal extraction. The physics requirements for the interim lepton collider, FCC-ee, comprise the highest possible luminosity for a wide physics program ranging from the Z pole to the t production threshold, at beam energies between 45 and 175 GeV. The main physics programs are: operation at 45.5 GeV beam energy for running at the Z pole as a “TeraZ” factory and for high-precision MZ and ΓZ measurements; 80 GeV: W pair production threshold; 120 GeV: ZH production; 175 GeV: t-tbar threshold. Some measurable beam polarization is expected up to ⩾80 GeV, which will allow for precise beam energy calibration at the Z pole and at the W-pair threshold. Key features are the small vertical beta function at the collision point, βy∗, of only 1 mm, and a constant value of 100 MW for the synchrotron radiation power assumed at all energies. The power dissipation then defines the maximum beam current at each energy. Eventually a margin of a few percent may be required for losses in the straight sections. Table 2 compares the baseline parameters of FCC-ee with those of LEP-2. For operation at the Z pole an alternative parameter set with almost ten times higher luminosity is also included. The latter considers transversely smaller, but longer, bunches colliding at a 30-mrad crossing angle together with crab-waist sextupoles. Fig. 4 presents the expected luminosity performance per interaction point, assuming up to four IPs in total, as a function of center-of-mass energy. Arc optics exists for the four operational energies and both running scenarios. In all cases the horizontal design emittance is less than half the respective target value, leaving margin for the effect of errors and, possibly, high-intensity effects. Regardless of the collision scheme, the large number of bunches at the Z, W and H energies requires two separate rings, and the short beam lifetime, τbeam, limited by radiative Bhabha scattering at the high luminosity, calls for quasi-continuous injection requiring an on-energy injector in the collider tunnel. Fig. 5 shows the SR energy loss per turn as a function of beam energy. For each collision energy this loss translates into a minimum RF voltage, determined by the overvoltage needed for a decent quantum lifetime and by the momentum acceptance needed with regard to beamstrahlung. At the t-tbar threshold this RF voltage amounts to about 11 GV, which is the maximum voltage considered for the FCC-ee design. Operation at 500 GeV c.m.
would require a larger RF voltage of 35 GV. The RF system requirements are characterized by two regimes, namely operation at high gradient for H and t with up to ∼11 GV total RF voltage, and high beam loading with currents of ∼1.5 A at the Z pole. The RF system must be distributed over the ring in order to minimize energy-related orbit excursions. At 175 GeV beam energy, the total energy loss amounts to about 4.5% per turn, and optics errors driven by energy offsets may have a significant effect on the energy acceptance. The FCC-ee design aims at SC RF cavities with cw gradients of ∼20 MV/m and an RF frequency of 800 MHz. The “nano-beam/crab waist” scheme favors a lower frequency, e.g. 400 MHz. The conversion efficiency of wall-plug to RF power is critical. R&D is needed to push this efficiency far above 50%. To ensure an acceptable lifetime, the product ρ × η must be sufficiently large, which can be achieved by operating with flat beams, with long bunches, and with a large momentum acceptance of the lattice. The transition from the beam–beam dominated regime to the beamstrahlung-dominated regime depends on the momentum acceptance, as is illustrated in Fig. 6, considering a vertical emittance of 2 pm and βy∗ = 1 mm. The beamstrahlung lifetime is a steep function of the energy acceptance. SuperKEKB, with beam commissioning to start in 2015, will demonstrate several of the FCC-ee key concepts, such as top-up injection at high current; an extremely low βy∗ of 300 μm; an extremely low beam lifetime of 5 min; a small emittance coupling of εy/εx ∼ 0.25%; a significant off-momentum acceptance of ±1.5%; and a sufficiently high e+ production rate of 2.5 × 10¹²/s. SuperKEKB goes beyond the FCC-ee requirements for many of these parameters. Beside the collider ring, a booster of the same size must provide beams for top-up injection. The booster requires an RF system of the same size as the collider, but at low power. The top-up frequency is expected to be around ∼0.1 Hz, and the booster injection energy 10–20 GeV. The booster ring should bypass the particle-physics experiments. Upstream of the booster a pre-injector complex for e+ and e− beams of 10–20 GeV is required. The SuperKEKB injector appears to be almost suitable. Polarized beams can be of interest for two reasons: they allow for an accurate energy calibration using resonant depolarization, which will be a crucial advantage for measurements of MZ, ΓZ, and MW, with expected precisions of order 0.1 MeV; and they are necessary for any physics programme with longitudinally polarized beams, which would, however, also require that the transverse polarization be rotated into the longitudinal plane at the IP using spin rotators, e.g.
as at HERA. Electron integer spin resonances are spaced by 440 MeV. Possible crystal applications for future e+e− colliders, like FCC-ee, ILC and CLIC, include faster electromagnetic shower generation, and consequently smaller electromagnetic calorimeters, generation or measurement of electron beam polarization, enhanced positron sources, and e± crystal collimation. In 2012 a conceptual design report was published for the Large Hadron Electron Collider (LHeC), which aims at colliding high-energy electrons with one of the two proton beams circulating in the LHC. The two options considered for realizing the lepton branch of this collider are a ring–ring collider, with an additional electron ring installed in the LHC tunnel and bypasses around the LHC experiments, and a recirculating linac with energy recovery (ERL) in a new tunnel of about 9 km circumference, overlapping with the LHC only locally at a single interaction point. Two similar options exist for the FCC: the FCC-he could be realized either as a ring–ring collider or as an ERL–ring collider, using the lepton beam from the LHeC ERL or a new facility. The FCC study plan matches the time scale of high-energy frontier physics sketched in Fig. 7. After the kick-off meeting in February 2014, detailed work on the FCC-ee design has started. The wide scope of the FCC study leaves room for many interesting investigations. At present, the study emphasis is shifting toward parameter optimization and the choice between alternatives. Various technologies need dedicated design efforts, such as magnets, SRF, collimators, the vacuum system, etc. The FCC study is presently being formalized through memoranda of understanding. More than 40 institutes from around the world, in particular from Europe, Asia and North America, have already formally joined the FCC study. In parallel, an international collaboration board with representatives from all study participants has been set up. At the preparatory collaboration-board meeting on 9–10 September 2014, Leonid “Lenny” Rivkin from PSI and EPFL was unanimously elected as interim Collaboration Board Chair. The first annual FCC workshop will be held at Washington DC in March 2015, jointly organized by CERN and the US DOE’s Office of Science, and marks an important milestone of the FCC study, namely the end of the “weak interaction” phase. To go much beyond the FCC, entirely new concepts will be needed. One promising path is circular crystal colliders (CCCs), where bent crystals, with an effective field of several hundred or a thousand tesla, take on the role of the dipole or quadrupole magnets in present-day accelerators, as is sketched in Fig. 8. Unlike conventional storage rings, where particles are accelerated by raising the dipole magnetic field, in CCCs the bent crystals defining the ring geometry are static and the stored charged particles are accelerated instead by induction RF units. Fig. 9 presents a possible evolution of the circular CERN/FCC complex with a 1000-TeV CCC as its final stage. Dielectric materials employed for dielectric-wakefield acceleration would have higher breakdown limits than metal. The dielectric structures, e.g.
with an aperture of several hundred nm at λ = 800 nm, would be driven in the THz range, at optical wavelengths or in the near-IR regime, and provide accelerating gradients of 1–3 GV/m. They could be excited by an e− beam or by a laser. Plasmas can sustain even higher gradients, of order G ≈ 100 GV/m at typical plasma densities of n0 ≈ 10¹⁷–10¹⁸ cm⁻³, with the gradient scaling as the square root of the plasma density. The plasmas could also be driven by lasers or e− beams, and in addition by p beams. The repetition rate depends on the pulse rate of the driver, which for lasers may be up to a few kHz with an accelerated charge of 50 pC per bunch. “Unlimited” acceleration is predicted to be possible. Even more interesting would be acceleration in crystal channels. Here, thanks to the higher density, gradients are even higher, of order G ≈ 10 TV/m at n0 ≈ 10²²–10²³ cm⁻³. The crystal accelerators would be driven by X-ray lasers. A maximum energy of the crystal accelerator is set by radiation emission due to betatron oscillations between crystal planes, amounting to Emax ≈ 300 GeV for e+, 10⁴ TeV for muons, and 10⁶ TeV for protons. Operation at 10 TV/m would require a disposable crystal accelerator, while at 0.1 TV/m the crystal accelerator would be reusable. A possible laser drive could consist of side injection of X-ray pulses using long fibers. From the above limit of only a few hundred GeV, we conclude that e± beams may soon run out of steam in the high-gradient world. To overcome this limit, we must change the particle type, and e.g. use muons instead of electrons to realize a linear X-ray crystal muon collider. Possible challenges would be the muon production rate and the neutrino radiation. The sketch in Fig. 10 illustrates how the neutrino radiation could be mitigated by colliding with a natural vertical crossing angle. Both the circular crystal collider and the linear crystal muon collider could move the accelerator energy frontier another 3–4 orders of magnitude toward the Greisen–Zatsepin–Kuzmin limit characterizing the highest-energy particles impacting Earth from outer space. An ultimate limit of electromagnetic acceleration arises from the breakdown of the vacuum at the Schwinger critical field for e+e− pair creation, Ecr ≈ 10¹⁸ V/m, defined by the condition that the Compton wavelength times eEcr is of the order of the rest mass of an electron–positron pair. Reaching the Planck scale of 10²⁸ eV at the critical field would need a 10¹⁰ m long accelerator. In the 1990s this possibility of building a Planck-scale collider was judged “not an inconceivable task for an advanced technological society”. A bright future lies ahead for accelerator-based High-Energy Physics. The HL-LHC prepares the FCC technology. The Channeling conferences provide tools which can enhance the FCC performance and already prepare for the future machines following the FCC. Several different routes exist toward 10 TeV/100 TeV and 1 PeV collisions, e.g. a linear path: ILC → CLIC → DWAC → XRCMC, and a circular path: FCC-ee → FCC-hh → CCC. Crystals are a key ingredient for the final stages of both routes, where they are used either for bending or for acceleration. Eventually an outer-space solar-system accelerator will be needed to reach the Planck scale.
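The stored-energy and synchrotron-loss figures quoted above can be cross-checked with simple arithmetic; the Python sketch below uses rounded parameters from the text, while the FCC-hh bunch population and bunch number and the FCC-ee bending radius are assumed round values rather than quoted ones.

e_charge = 1.602e-19  # elementary charge, C

# Stored energy per proton beam: protons per bunch x bunches x beam energy.
def stored_energy_gj(protons_per_bunch, n_bunches, beam_energy_tev):
    return protons_per_bunch * n_bunches * beam_energy_tev * 1e12 * e_charge / 1e9

print(f"LHC beam: {stored_energy_gj(1.15e11, 2808, 7):.2f} GJ")   # ~0.36 GJ (360 MJ)
print(f"FCC-hh beam (assumed 1e11 p/bunch, 10600 bunches, 50 TeV): "
      f"{stored_energy_gj(1.0e11, 10600, 50):.1f} GJ")            # ~8 GJ, as quoted

# Electron synchrotron loss per turn: U0[GeV] ~ 8.85e-5 * E[GeV]^4 / rho[m].
beam_energy_gev = 175.0
rho_m = 10_400.0  # assumed FCC-ee bending radius for a ~100 km ring
u0 = 8.85e-5 * beam_energy_gev**4 / rho_m
print(f"FCC-ee at 175 GeV: U0 ~ {u0:.1f} GeV/turn "
      f"({100 * u0 / beam_energy_gev:.1f}% of the beam energy)")  # close to the ~4.5% quoted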
We sketch the actual European and international strategies and possible future facilities. In the near term the High Energy Physics (HEP) community will fully exploit the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). Post-LHC options include a linear e+e− collider in Japan (ILC) or at CERN (CLIC), as well as circular lepton or hadron colliders in China (CepC/SppC) and Europe (FCC). We conclude with linear and circular acceleration approaches based on crystals, and some perspectives for the far future of accelerator-based particle physics.
410
Enabling the measurement of particle sizes in stirred colloidal suspensions by embedding dynamic light scattering into an automated probe head
Measuring particle sizes and size distributions in colloidal suspensions is of great importance in many technical processes. These processes are necessary for the production of polymers, pharmaceutical active ingredients and additives for the food, cosmetic and paint industries. For better monitoring of these technical processes, online measurements of the particle size without time-consuming sampling are needed. Dynamic light scattering (DLS) is one of the standard methods for measuring particle sizes in fluids and has been established for many years. This method is based on the examination of random particle movement due to constant Brownian motion. The collision of particles with surrounding liquid molecules results in a diffusional process, with small particles moving faster than large particles. To monitor this diffusion, the sample is illuminated with a monochromatic laser beam. Depending on the position of the particles relative to each other, light scattered by the particles undergoes constructive or destructive interference. The resulting intensity fluctuations are detected time-resolved by a photomultiplier positioned at a certain angle to the incident light. The decay of the autocorrelation of the measured intensity is related to the translational diffusion coefficient, which in turn is used to calculate the hydrodynamic radius of the particles using the Stokes–Einstein equation. For accurate determination of the diffusion coefficient, multiple light scattering has to be avoided. In concentrated suspensions, which are typical in industrial polymer production, the incident light is scattered multiple times before it is detected, which directly influences the intensity autocorrelation. To circumvent this issue, there are different enhancements of DLS, such as two-color cross correlation or 3D cross correlation. Another demand for industrially applicable particle sizing is the possibility of direct in-line measurements. While able to measure in concentrated suspensions, both cross-correlation DLS methods involve complex optical setups and are not suitable for in-line measurements. In contrast, fiber-optic quasi-elastic light scattering (FOQELS) or fiber-optic dynamic light scattering methods use an immersion probe with a simple and robust optical setup and are therefore ideally suited for in-line applications. Both techniques are fiber-coupled and collect the backscattered light for particle size determination. With small penetration depths, concentrated dispersions with high solid content up to 40 wt% can be measured. Nevertheless, in-line DLS measurements are challenging, since laboratory as well as industrial reactors typically are stirred. Accurate particle sizing using DLS necessitates a resting fluid in the measured sample. In actively mixed fluids the diffusion is overlaid by turbulent convection, which prohibits diffusion measurements. Hence, it is not possible to apply DLS in fluids that exhibit forced convection, either due to stirring or even due to sufficiently large pressure or temperature gradients. Apart from DLS, there are several other methods to determine the particle size in fluids, such as angular-resolved static light scattering (SLS). This method is capable of sizing particles with diameters of at least several tens of nanometers. To avoid multiple scattering, SLS measurements are usually performed in highly diluted samples. Therefore, SLS is not applicable for in-line monitoring of samples with high solids content. Also commonly used for size determination of
industrial polymers or macromolecules is size exclusion chromatography, a method that involves sample drawing and uses small flow rates and hence is not suitable for in-line monitoring. In practice, the aforementioned methods are used either off-line, by taking samples from the production line to the laboratory, or by installing fully automated on-line trains in bypasses or loops. Sampling and further sample preparation by dilution or separation are time-consuming and induce a delay of up to several minutes between sampling and size information. In contrast, in-line measurements can provide real-time data or only have a delay of a few seconds, depending on the applied method, facilitating possible improvements in process and quality control. Hence, a particle sizing method capable of measuring in-line in undiluted and stirred suspensions and enabling close to real-time monitoring of technical processes is desired. By measuring the bulk turbidity of a sample, one can calculate the particle size for monodisperse particles with known optical constants. Since these optical constants can vary during polymerization processes, turbidity measurements are not sufficient for reliable particle size measurements. Optical imaging methods have also been applied in combination with in-line probes in stirred vessels. These methods are based on capturing images using a CCD camera, and as such are restricted by the optical diffraction limit and cannot detect nanometer-sized particles. A promising approach for in-line particle sizing of stirred turbid colloidal suspensions is Photon Density Wave spectroscopy. Using a special in-line probe, intensity-modulated light is detected at multiple distances between excitation and collection fiber. Since the method relies on strong multiple scattering, it can only be applied to dispersions exhibiting at least a certain level of turbidity and hence cannot be used to monitor the initial stage of particle growth in processes that still exhibit single scattering. This paper introduces a novel probe head to enable in-line FOQELS measurements in stirred colloidal suspensions. To achieve this, the novel probe head and the probe of a commercial DLS device were mounted together and then immersed directly into the reactor. A possible application of such an in-line particle sizing method is the monitoring of microgel particle growth in a precipitation polymerization process. Microgels are soft colloidal particles formed by cross-linked polymer chains. The particle growth during precipitation polymerization depends on the reaction conditions, i.e. the concentrations of the main reagents, crosslinker, surfactant and initiator, as well as the reaction temperature. Higher polymerization temperatures and initiator concentrations result in faster particle growth. A typical microgel synthesis takes between 5 and 20 min from initiation to final conversion. For highly diluted syntheses at low temperatures, the growth of microgel particles inside a cuvette was successfully monitored using static light scattering. Kara et al.
conducted microgel syntheses inside quartz cells at room temperature and at temperatures up to 50 °C. Placing the cell inside a UV–VIS spectrometer, they observed a decrease in transmitted intensity during the gelation process. This decrease was attributed to an increase in light scattered by the particles. The scattered light intensity was correlated to the particle volume using Rayleigh’s equation, which is only valid for particles smaller than 0.1 of the laser wavelength. To monitor microgel particle growth at lab production scale in undiluted and stirred systems, without being restricted to particles smaller than 40–70 nm, this method is not suitable. The remaining article is structured as follows. In the experimental section the design of the probe head is presented, together with the commercially available DLS devices used for the in-line measurements as well as for off-line validation measurements. Furthermore, the experimental setups involving the probe head are described, which consist of stirred and unstirred measurements of a colloidal suspension using particles of a fixed size as well as the application of the probe head for in-line monitoring of the particle growth during microgel syntheses. The results section reports the particle sizes measured using the probe head for both experimental setups along with the off-line reference measurements taken during particle synthesis. The commercial device, which is equipped with the novel probe head, is a Nanotrac 250. The measurement principle is shown in Fig. 1. This device is factory-equipped with an immersion probe, which allows easy access to the sample due to the probe’s small form factor. The light emitted by a laser is guided to the optical probe by a fiber and is focused into the sample close to the protective sapphire window, allowing measurements in highly concentrated suspensions of up to 40 wt% solids. The 180° backscattered light is collected and guided through a fiber to a detector. The laser light reflected by the protective window is also guided to the detector. The scattered light of the particles and the reflected light from the protective window are overlaid on the detector and cause interference. The time-resolved intensity fluctuations correlate with the motion of the particles, from which the particle size distribution is calculated by suitable mathematical models. This device covers a particle size range from 0.8 nm to 6.5 μm. The above-described heterodyne method improves the signal-to-noise ratio, especially for small particles and low particle concentrations, and enables accurate particle sizing over a large concentration range. The software delivered by the manufacturer controls the device and calculates the particle sizes. To perform measurements in stirred fluids with the Particle Metrix/Microtrac Nanotrac 250, a custom-designed probe head pictured in Fig.
2 was developed. The probe head was tailored to be attached directly to the commercial probe, which is then immersed together with the probe head into the reaction fluid. The task of this probe head is to separate a small volume from the bulk fluid, which can then be measured using the commercial probe. This is further called compartmentalization. Additionally, the probe head has to actively exchange the sample volume between two measurements. The requirements for this probe head were compactness, so that it can be applied in common laboratory reactors, and sufficient robustness to be used in industrial stirred reaction vessels. The separated sample volume should be as small as possible, it should be exchanged sufficiently fast between measurements, and most importantly it has to be protected from external motion and stray light. The probe head’s outer diameter is 35 mm and its length is 82 mm. The probe head consists of three parts for the enclosure made from stainless steel. A miniature stepper motor with a diameter of 6 mm is secured by a stainless steel holder. The rotor, made from polytetrafluoroethylene, is attached to the stepper motor. There are several seals in the probe head to keep moisture away from the stepper motor. The custom probe head surrounds the optical DLS probe of the DLS device and provides a small enclosed sample chamber for the DLS measurements. The exchange of the sample fluid is carried out by the miniature stepper motor. The stepper motor stops the rotor in a defined position after exchanging the sample, so the optical path for the DLS probe is free and the measurement chamber is properly encapsulated. Because of the small sample chamber size the fluid stalls very fast and does not disturb the measurement through any overlaid motion. The stepper motor is controlled by a motion controller and software to program the motion profile. A Zetasizer Nano ZS was used as an off-line reference device. This device uses classical single-scattering DLS to calculate particle sizes in a range from 0.3 nm to 10 μm with a 633 nm laser and detects the backscattered light at 173°. The device features a temperature-controlled holder for a cuvette. The sample in the cuvette needs to be diluted to avoid multiple scattering. Before the probe head is applied for in-line measurements during microgel synthesis, the suitability of the compartmentalization is tested on previously synthesized microgel particles. Therefore, the probe head was immersed into a beaker filled with the colloidal microgel suspension and a magnetic stir bar was added. The beaker was placed on top of a magnetic stirrer. Measurements were conducted with and without stirring. For comparison, the same measurements were repeated without the probe head. The integration time of the scattered light for each DLS measurement was 30 s, with six repeated measurements per setup. In order to show that the new probe head can be used for in-line monitoring of particle growth during a polymerization reaction, a one liter double-walled reactor with a temperature-controlled oil heating mantle was equipped with the new device. As an exemplary application for in-line monitoring of particle growth, a microgel synthesis was selected. Two different polymerization reactions were analyzed, namely the particle formation of poly-N-vinylcaprolactam (PVCL) and of poly-N-isopropylacrylamide (PNIPAM). The following chemicals were used for the synthesis of the microgels. As monomers, N-isopropylacrylamide (NIPAM) and N-vinylcaprolactam (VCL) were employed. N,N′-Methylenebisacrylamide,
cetyltrimethylammonium bromide and 2,2′-azobis(2-methylpropionamidine) dihydrochloride (AMPA) were used as crosslinker, surfactant and initiator, respectively. NIPAM and VCL were purified by recrystallization in hexane with subsequent high-vacuum distillation at 80 °C. All other chemicals were used as received. Deionized water was used as solvent. The synthesized microgels have a polymer content of about 1.5 wt%. The exact amount of each chemical used in the two discussed experiments can be found in Table 1. Each microgel synthesis was performed following the procedure described in Pich et al. After the deionized water was heated to the required temperature, the monomer, cross-linker and surfactant were added. During the entire process, the reactor was kept under a constant nitrogen atmosphere. After 30 min, the initiator was added to start the polymerization. From this point, the light scattering measurements were started. The integration time of the Nanotrac was set to 10 s. Once a DLS measurement is finished, the sample is exchanged by movement of the impeller, which stops in the pre-defined position before triggering a new DLS measurement. The whole measuring cycle, including data processing, takes about 30 s. In addition to the in-line measurements, 5 ml samples were taken at certain intervals during polymerization using a syringe with a long needle. For the first ten minutes after initiation, these samples were taken each minute. Between minute 10 and 30, the samples were taken every five minutes. From then on, the samples were taken every ten minutes. The samples were directly cooled down in numbered sample vials placed in liquid nitrogen and an inhibitor was added to prevent further polymerization and particle growth. To confirm the particle sizes measured with the novel probe head, these samples were measured off-line in a Malvern Zetasizer Nano ZS in a fused silica cuvette. Beforehand, the samples were diluted to avoid multiple scattering and filtered through a 1.2 μm PET filter to remove dust. An additional surfactant was added to prevent particle aggregation due to the instability of the particles at the early stage of reaction. The surfactant concentration was kept below the critical micelle concentration to prevent any influence on the DLS measurements. The cuvette temperature of the Zetasizer Nano ZS was set to 50 °C and the integration time was set automatically to obtain a good signal-to-noise ratio. Every measurement was repeated three times and the resulting arithmetic average size was used for comparison to the in-line measurements. For the test of compartmentalization the open beaker setup was used. Four different conditions were examined. First, the bare optical probe of the DLS device was used in the stirred and unstirred microgel suspension. The magnetic stirrer was set to 1000 rpm for the stirred measurements. Afterwards, the optical probe of the DLS device was equipped with the new probe head and the measurements in stirred and unstirred microgel were repeated. Fig.
3 shows the results of these measurements. Zone 1 shows the particle size determined from repeated measurements with the bare probe in the unstirred suspension as a size reference. Zone 2 displays the sizes resulting from measurements taken during active stirring. Due to the convection, the particle sizes are erroneously identified as markedly smaller. The calculated sizes vary between 70 nm and 350 nm and a reliable value cannot be obtained. Zone 3 contains particle sizes from measurements using the custom probe head, albeit without stirring. The deviation between the detected sizes is slightly smaller than the deviation without the custom probe head. This narrower distribution of measured particle sizes could be caused by the compartmentalization of the sample, which effectively keeps ambient light away from the detector and restricts even small liquid movements. The particle sizes of measurements with the new probe head under stirred conditions are depicted in zone 4. The particle sizes determined under agitated conditions match the values determined under unstirred conditions, which ultimately proves the operational capability of the developed probe head. Table 2 shows the calculated values of the average and standard deviation for each zone. The standard deviation for the measurements with the attached custom probe head is improved by a factor of 2.4–3.2 over the measurements with the bare probe of the DLS device. The deviations of the averages of zone 3 and zone 4 are within the deviation of zone 1. The results of the measurements of the colloidal microgel suspension show that the design of the custom probe head is suitable to isolate and measure a small volume from the system, thereby preventing any interference of the encapsulated sample with the stirred bulk fluid. The accuracy of the custom probe under stirred conditions is at least as good as that of the bare probe under unstirred conditions. This renders reliable DLS measurements possible even in vigorously stirred environments. Following the test of compartmentalization in the open beaker, the probe head was applied to monitor microgel syntheses in the temperature-controlled one liter reactor under nitrogen atmosphere. Fig. 4 displays the particle size monitoring during polymerization of PVCL at a reaction temperature of 60 °C. This temperature was chosen since it is the lowest temperature that reliably starts the polymerization of PVCL using AMPA as initiator and hence permits the most measurements during the particle growth phase. Each dot in the figure represents one particle size measurement, of which the red triangles were visually identified as outliers and excluded from the computation of the floating mean, which is determined piecewise over three successive values. The two outliers might be caused by small air bubbles that reached the measurement chamber. The first measurements after the addition of the initiator at time zero show a particle size of 2–3 nm due to background scattering. From minute five onwards, the growth of the particles can be seen very well. Approximately twelve minutes after initiation, the particle sizes reach a plateau at ∼81 nm ± 3 nm SD diameter. The time between the first noticeable change in particle size and the onset of the plateau is about 8 min. The second system monitored using the probe head was a polymerization of PNIPAM at a reaction temperature of 65 °C. The corresponding in-line recorded particle sizes are displayed in Fig.
5. To assess the validity of the particle sizes determined using the novel probe head, samples were taken during a microgel synthesis, directly quenched to stop further reaction and subsequently analyzed off-line. In Fig. 5, the particle sizes measured using the in-line probe head are depicted using black squares and the particle sizes measured off-line using blue dots. Again, red triangles represent visually selected outliers. The slopes of both measurements are in good agreement with each other. While the in-line measured sizes remain at their initial level until minute ten, the off-line measurements are able to resolve small particle growth from minute six. Starting with the significant jump to 55 nm at minute 10, the in-line measurements display a constant particle growth, which is in accordance with the off-line measurements. The differences between both measurement techniques present at early reaction times decrease as the reaction progresses. The particle sizes from both techniques reach a plateau of 111 ± 4 nm SD for the in-line and 114 nm for the off-line measurements, respectively. A reason for the decreasing differences between in-line and off-line data could be the integration time of the Particle Metrix/Microtrac Nanotrac 250. Since the particles continue to grow during the acquisition time, the scattering pattern changes as well. The DLS evaluation method then determines a particle size that best approximates the measured signal, which might be distorted due to the shifting scattering pattern. The slower the reaction becomes, the smaller are the differences between both methods. The handling time between taking the sample and stopping the reaction with liquid nitrogen also allows the particles to keep growing. After the syntheses, a minor contamination from chemical residue is visible on the surface of the probe head and on the protective window of the optical DLS probe. Examination of the DLS measurements shows no remarkable influence of these contaminations on the detected particle sizes. The residue rather causes a static reduction of the mean scattering intensity and thus might increase the measurement time necessary to retain the targeted measurement uncertainty. Nevertheless, the accuracy reached using the custom probe head in stirred surroundings is at least as good as that of measurements performed with a bare DLS probe in unstirred fluids. A novel probe design has been introduced to enable in-line monitoring of particle sizes inside a stirred fluid based on dynamic light scattering without the need for sampling and dilution. It was shown that the fluid compartmentalization in the probe head is sufficient to exclude any influences of the stirred bulk liquid. Further experiments showed that the growth of microgel particles could be suitably followed during stirred polymerization. A comparison between the direct in-line measurements and off-line determined particle sizes using a state-of-the-art device showed that both measurement techniques are in good agreement with each other. The application of the probe design in industrial environments that currently rely on tedious off-line analysis of drawn samples could vastly facilitate product quality control. In batch production it could lead to shorter batch times, since the end of particle growth can be detected automatically. It also renders possible the discovery of unexpected product conditions and the triggering of counteractive measures that might otherwise remain undetected for considerably longer times with conventional
monitoring techniques.
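For reference, the standard single-exponential DLS analysis chain sketched in the text (autocorrelation decay → diffusion coefficient → Stokes–Einstein diameter) can be written down as follows; the laser wavelength, scattering geometry and fitted decay rate are illustrative assumptions, and the heterodyne evaluation implemented in the commercial instrument is more involved.

import numpy as np

# Idealized homodyne, monodisperse DLS relations (illustration only).
kB = 1.380649e-23        # Boltzmann constant, J/K
T = 298.15               # temperature, K
eta = 0.89e-3            # viscosity of water at 25 C, Pa*s
lam = 780e-9             # assumed laser wavelength, m
n_medium = 1.33          # refractive index of water
theta = np.deg2rad(180)  # backscatter detection angle

q = 4 * np.pi * n_medium / lam * np.sin(theta / 2)  # scattering vector, 1/m

# Suppose fitting g2(tau) - 1 to exp(-2*Gamma*tau) gave this decay rate:
gamma = 2.2e3            # 1/s, hypothetical fitted value
D = gamma / q**2         # translational diffusion coefficient, m^2/s

d_h = kB * T / (3 * np.pi * eta * D)  # Stokes-Einstein hydrodynamic diameter
print(f"D = {D:.2e} m^2/s, hydrodynamic diameter ~ {d_h * 1e9:.0f} nm")
# Gives roughly 100 nm, i.e. the size range of the microgels discussed above.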
A novel probe head design is introduced, which enables in-line monitoring of particle sizes in undiluted stirred fluids using dynamic light scattering. The novel probe head separates a small sample volume of 0.65 ml from the bulk liquid by means of an impeller. In this sample volume, particle sizing is performed using a commercially available fiber-optical backscatter probe. While conventional light scattering measurements in stirred media fail due to the superposition of Brownian motion and forced convection, undistorted measurements are possible with the proposed probe head. One measurement takes approximately 30 s, which is used for liquid exchange by rotation of the impeller and for collection of scattered light. The probe head is applied for in-line monitoring of the particle growth during microgel synthesis by precipitation polymerization in a one liter laboratory reactor. The in-line measurements are compared to off-line measurements and show good agreement.
411
Chondrocyte dedifferentiation increases cell stiffness by strengthening membrane-actin adhesion
Expansion of articular chondrocytes in monolayer culture is required for tissue engineering and cell therapy applications such as autologous chondrocyte implantation. A fundamental problem for tissue engineering is to achieve a sufficient number of cells to produce enough tissue to fill the defect site during surgery1. Relatively few methods have been used to obtain the fast proliferation rate of cells needed for these approaches2. Chondrocyte expansion in monolayer is also routinely used in research studies. However, culture in monolayer is associated with dedifferentiation and changes in phenotype. These changes have been demonstrated at both the morphological and the gene expression level3. Chondrocyte dedifferentiation in monolayer was characterized by the loss of collagen type II and an increase in collagen type I gene expression4,5. Previous studies have reported that expansion in monolayer and the associated dedifferentiation are driven by alterations in actin cytoskeletal organization. Chondrocytes in monolayer show remarkable changes in F-actin structure, forming well-defined stress fibres and exhibiting a more fibroblast-like phenotype6. Other studies report that dedifferentiation in monolayer also induces changes in cellular viscoelastic behaviour determined by atomic force microscopy7. It is unclear what drives these changes in cell structure and mechanics during expansion in monolayer. However, numerous previous studies have demonstrated that the mechanical properties of the extracellular environment influence cell mechanics both in 2D8–12 and in 3D13–16. The cellular mechanical properties are known to be influenced by the organization of the actin cytoskeleton17; disruption of the F-actin structure by pharmacological agents reduces cell stiffness18. Cortical actin is connected to the cell membrane via linker proteins such as ezrin, radixin and moesin (ERM), and this linkage plays a key role in determining cell mechanical properties19. Previously we have shown that the interaction between the actin cortex and the cell membrane regulates changes in stem cell deformation behaviour and apparent mechanical properties during chondrogenic differentiation20. In this study we test the hypothesis that chondrocyte expansion and dedifferentiation in monolayer alter the adhesion between the membrane and the actin cortex and that this regulates the cellular mechanical properties as determined by micropipette aspiration. We use a combination of experiments and analytical modelling and show that culture in monolayer increases the modulus of isolated bovine chondrocytes. We demonstrate that this biomechanical change is mediated by an increase in the strength of the membrane-actin cortex adhesion, which reduces susceptibility to bleb formation. This increased cortex adhesion is associated with increased expression of ERM, the membrane-actin cortex linker proteins, as well as increased cortical actin organization. These studies therefore demonstrate for the first time that chondrocyte expansion in monolayer increases the adhesion between the membrane and the actin cortex, leading to alterations in cellular deformation, bleb formation and mechanical properties. Dulbecco's Minimal Essential Medium, Dulbecco's Modified Eagle Media low glucose; foetal bovine serum, penicillin/streptomycin, HEPES solution 1 M, Trypsin/EDTA, phosphate buffered saline, l-ascorbic acid, l-Glutamine 200 mM, Sigmacote solution, immersion oil, Triton X-100, paraformaldehyde, Sodium chloride, Sodium dodecyl sulphate, Trizma base, IGEPAL CA-630, Tween 20;
4× Laemmli buffer; phalloidin, ProLong Gold; primary antibody rabbit polyclonal ERM, primary antibody rabbit monoclonal phosphorylated Ezrin/Radixin/Moesin; primary antibody mouse monoclonal anti-β-tubulin; 680RD Donkey anti-Mouse IgG, 800CW Donkey anti-Rabbit IgG; Sodium deoxycholate. Chondrocytes were isolated from bovine steers and suspended or cultured in Dulbecco's Minimal Essential Medium with 16% FBS, 1% penicillin/streptomycin, 2 mM l-glutamine, 16 mM HEPES buffer, and 0.075 g l-ascorbic acid. A detailed description of cell isolation and culture is presented in the Supplementary Information. Micropipette aspiration was used to determine the viscoelastic properties of P0 and P1 chondrocytes as described in previous studies21,22. A peristaltic pump was used to provide precise temporal control over the head of water and hence the aspiration pressure. Micropipettes were made by drawing borosilicate glass capillary tubes with a programmable pipette puller. The micropipettes were fractured on a microforge to obtain an inner diameter of approximately 5–6 μm. The micropipettes were coated with Sigmacote to reduce friction and prevent cell adhesion. The micropipettes were then filled with imaging medium (IM) and placed in a holder controlled by a micromanipulator. A cell suspension prepared in IM was placed in a chamber at room temperature on the inverted stage of a confocal microscope with a ×63/1.4 NA oil immersion objective lens. Cells were partially aspirated inside a micropipette by applying a step negative suction pressure of 0.76 kPa in 2 s. A confocal microscope was used to capture brightfield images every 2 s over a 180 s period. Temporal changes in cell aspiration length into the micropipette were measured from the images using MATLAB and fitted using the well-established standard linear solid (SLS) theoretical model to estimate the cellular equilibrium moduli, instantaneous moduli and viscosity23. All micropipette experiments were conducted within 1 h following cell trypsinization and detachment from monolayer culture. In addition, the brightfield images were used to estimate the percentage of cells showing membrane-actin cortex detachment during micropipette aspiration. To estimate the critical pressure for membrane-actin cortex detachment, micropipette aspiration was repeated with cells subjected to a step negative pressure of 0.11, 0.32 or 0.54 kPa applied at a rate of 0.38 kPa/s and held for 180 s. The percentage of cells showing blebbing at each pressure was determined from brightfield images. The micropipette aspiration protocol was then modified to provide a more precise estimate of the critical pressure required for membrane-actin cortex detachment. For this approach individual cells were subjected to negative pressure in a series of seven increments of 0.147 kPa up to a maximum pressure of 1.03 kPa. The protocol was initiated by applying negative pressure increments at a rate of 0.38 kPa/s with 5 s between increments, such that the average pressure rate was 0.024 kPa/s. The critical pressure required for membrane-actin cortex detachment was taken as the pressure at which a membrane bleb was initiated. All experiments were conducted within 1 h following cell trypsinization and detachment from monolayer culture. Cortical actin in fixed P0 and P1 chondrocytes was visualised by confocal microscopy and quantified as detailed in previous studies13. Western blot analysis was performed to quantify the amount of total and phosphorylated ERM linker proteins. Detailed descriptions of both actin visualisation and western blots are
presented in the Supplementary Information. Statistical analyses were performed using GraphPad Prism 5 software. Chondrocytes were isolated from two donors for each experiment. Aspiration experiments were repeated at least twice for each condition. Normality testing was performed for all experiments. For non-parametric statistics, data that could not be assumed to be Gaussian distributed were analysed using the Mann–Whitney U test and presented as a population with the median value indicated by a bar. For parametric statistics, data assumed to be Gaussian distributed were analysed using an unpaired Student's t test. These data are presented as mean values with 95% confidence intervals. A chi-squared test was used to examine the differences between two or three proportions. Differences were considered statistically significant at P < 0.05 unless otherwise stated. The viscoelastic properties of chondrocytes in suspension were measured using the well-established micropipette aspiration technique. In order to understand the influence of expansion in monolayer and the associated dedifferentiation, two groups of cells were tested, namely freshly isolated primary chondrocytes (P0) and cells cultured in monolayer for 9 days until passage 1 (P1). Individual cells were subjected to a step negative pressure of 0.76 kPa applied over 2 s, which was then maintained for 180 s. Brightfield images of a representative cell from both groups are presented in Fig. 1. The number of cells tested in each experimental group is indicated in Table I. For both the P0 and P1 groups, 84% of chondrocytes were successfully aspirated, yielding a characteristic temporal change in aspirated length. As in previous studies20, cells were classified into three different modes of deformation response based on the change in aspirated length from 120 to 180 s, as shown in Fig.
1 for representative cells from each mode. Cells for which the aspirated length increased or decreased by greater than 5% from 120 to 180 s were classified as ‘increase’ and ‘decrease’ respectively. Cells that exhibited changes in aspirated length of less than 5% were classified as ‘equilibrate’. At both P0 and P1 the majority of cells were classified as equilibrate. However, at P1 there was a reduction in the percentage of cells showing the ‘increase’ mode and an increase in the percentage of cells reaching approximate equilibrium. Furthermore, at P1 about 4% of cells exhibited a retraction in aspirated length, which was not observed in any cells at P0. These differences in the percentages of cells exhibiting each mode of response were statistically significant. At P1 the aspirated length at 180 s was significantly shorter than that at P0, suggesting that passage in monolayer increases cell stiffness. The temporal change in aspirated length was fitted by the analytical SLS model using MATLAB. The percentage of cells for which the SLS model accurately fitted, as defined by an R² value greater than 0.95, was slightly lower at P1 compared to P0, with values of 66% and 75% respectively. Based on the SLS model, chondrocytes at P1 were found to have a greater equilibrium modulus compared to those at P0, with median values of 0.40 kPa at P1 and 0.16 kPa at P0, the difference being statistically significant. There were no significant differences between P0 and P1 in terms of the instantaneous modulus or the viscosity. Further studies were conducted to understand the mechanism responsible for the increase in cellular equilibrium modulus following chondrocyte passage and the associated dedifferentiation. Previously we have shown that increased susceptibility to form membrane blebs during micropipette aspiration can cause cells to appear softer with a reduced equilibrium modulus20,21. Thus we hypothesised that the increased modulus for cells at P1 is due to reduced susceptibility to bleb formation and hence a reduction in the number of cells which bleb during micropipette aspiration. At an aspiration pressure of 0.76 kPa, a subpopulation of chondrocytes formed one or more membrane blebs at the leading edge of the aspirated part of the cell within the micropipette. This phenomenon was visible from brightfield images, in which blebs appeared relatively transparent and extended rapidly within the micropipette. Where multi-blebbing behaviour was observed, the first bleb was followed by a subsequent bleb initiating from the leading edge of the first bleb. In some cases as many as 3–5 separate blebs occurred in quick succession over the 180 s. More than 80% of P0 chondrocytes demonstrated blebbing compared to 42% at P1. Furthermore, there was a change in the nature of the blebbing, with all blebbing cells at P0 exhibiting only one bleb compared to cells at P1, where 37% of cells showed multi-blebbing behaviour, the difference being statistically significant. Supplementary data related to this article can be found online at http://dx.doi.org/10.1016/j.joca.2015.12.007. The following is the supplementary data related to this article: Movie showing multi-blebbing behaviour in the aspirated region. The first bleb formation is followed by multiple additional blebs leading to unidirectional-like blebbing. Micropipette aspiration was used to determine the effect of monolayer culture on the membrane-actin cortex adhesion strength. Chondrocytes at P0 and P1 were subjected to an aspiration pressure of either 0.11, 0.32, 0.54 or 0.76 kPa applied
Further studies were conducted to understand the mechanism responsible for the increase in cellular equilibrium modulus following chondrocyte passage and associated dedifferentiation. Previously we have shown that increased susceptibility to form membrane blebs during micropipette aspiration can cause cells to appear softer, with a reduced equilibrium modulus20,21. Thus we hypothesised that the increased modulus for cells at P1 is due to reduced susceptibility to bleb formation and hence a reduction in the number of cells which bleb during micropipette aspiration. At an aspiration pressure of 0.76 kPa, a subpopulation of chondrocytes formed one or more membrane blebs at the leading edge of the aspirated part of the cell within the micropipette. This phenomenon was visible in brightfield images, in which blebs appeared relatively transparent and extended rapidly within the micropipette. Where multi-blebbing behaviour was observed, the first bleb was followed by a subsequent bleb initiating from the leading edge of the first bleb. In some cases as many as 3–5 separate blebs occurred in quick succession over the 180 s. More than 80% of P0 chondrocytes demonstrated blebbing compared to 42% at P1. Furthermore, there was a change in the nature of the blebbing: all blebbing cells at P0 exhibited only one bleb, whereas at P1 37% of blebbing cells showed multi-blebbing behaviour, the difference being statistically significant. Supplementary data related to this article can be found online at http://dx.doi.org/10.1016/j.joca.2015.12.007. The following is the supplementary data related to this article: Movie showing multi-blebbing behaviour in the aspirated region. The first bleb formation is followed by multiple additional blebs leading to unidirectional-like blebbing. Micropipette aspiration was used to determine the effect of monolayer culture on the membrane-actin cortex adhesion strength. Chondrocytes at P0 and P1 were subjected to an aspiration pressure of either 0.11, 0.32, 0.54 or 0.76 kPa applied rapidly at a rate of 0.38 kPa/s. Cells were imaged for the following 180 s and bleb initiation and growth were observed from brightfield images. At the lowest aspiration pressure of 0.11 kPa, only 3% of cells showed bleb formation, with no difference between P0 and P1 cells. With increased aspiration pressure there was an increase in the percentage of cells showing blebs. However, cells at P0 had a greater tendency to form blebs compared to cells at P1, the difference being statistically significant at all pressures tested above 0.11 kPa. Interestingly, this increased percentage of blebbing cells was due to an additional subpopulation of approximately 30% of P0 cells for which bleb initiation occurred at pressures between 0.11 and 0.32 kPa. To provide an estimate of the critical pressure for membrane-cortex detachment and bleb initiation, the micropipette aspiration protocol was further modified as previously described20. Individual chondrocytes at P0 and P1 were subjected to aspiration pressure applied as a series of seven increments of approximately 0.147 kPa with an overall pressure rate of 0.024 kPa/s. There was a significant difference between P0 and P1 cells in terms of the critical pressure at which membrane-actin cortex detachment occurred, as identified from brightfield microscopy images. Chondrocytes at P1 exhibited a higher critical pressure, with a median value of 0.74 kPa compared to 0.59 kPa for P0 cells, the difference being statistically significant.
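To make the reduction of the stepped-pressure data concrete, the sketch below estimates a per-cell critical pressure as the pressure of the first increment at which a bleb is observed and compares the P0 and P1 distributions with the Mann–Whitney U test used for the non-parametric statistics above. The per-cell observations are hypothetical values chosen only for illustration, not data from this study.

```python
# Minimal sketch, assuming per-cell records of which of the seven ~0.147 kPa
# pressure increments first produced a bleb (None if no bleb was observed).
# The example observations below are hypothetical, not data from the study.
import numpy as np
from scipy.stats import mannwhitneyu

PRESSURE_STEP = 0.147  # kPa per increment, as described in the protocol

def critical_pressures(first_bleb_steps):
    """Convert the 1-based index of the first blebbing increment to a pressure (kPa);
    cells that never blebbed are excluded from the estimate."""
    return np.array([PRESSURE_STEP * s for s in first_bleb_steps if s is not None])

# Hypothetical observations: step index at which each cell first blebbed.
p0_steps = [3, 4, 4, 5, 3, 4, 5, 4, None, 3]
p1_steps = [5, 6, 5, None, 5, 5, 7, None, 5, 6]

p0_crit = critical_pressures(p0_steps)
p1_crit = critical_pressures(p1_steps)

u_stat, p_value = mannwhitneyu(p0_crit, p1_crit, alternative="two-sided")
print(f"median critical pressure P0 = {np.median(p0_crit):.2f} kPa, "
      f"P1 = {np.median(p1_crit):.2f} kPa, Mann-Whitney p = {p_value:.3f}")
```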
We have previously developed and validated a model for multi-blebbing dynamics during micropipette aspiration20. We here use this model to investigate whether the alterations in chondrocyte viscoelastic mechanical properties from P0 to P1 are driven by the changes in membrane-actin cortex adhesion and blebability. The model incorporates constants defining the critical pressure for membrane-actin cortex detachment as well as the combined stiffness of the membrane and actin cortex. We have previously used this model with micropipette aspiration to demonstrate that the increased cell stiffness associated with chondrogenic differentiation of mesenchymal stem cells is driven by greater membrane-actin cortex adhesion20. When the applied pressure is greater than the critical pressure, the membrane is detached from the actin cortex and blebbing occurs. Figure 4 presents a schematic illustration of the three main stages of cell aspiration into the micropipette as covered by this model, namely: initial cell deformation, membrane-actin cortex detachment and bleb initiation, and actin cortex reformation. Here we use the model to predict the temporal change in cell aspirated length during micropipette aspiration. The effective equilibrium modulus is proportional to the effective elastic constant, which can be calculated equally well from the simulation for cells showing multiple, single or no blebs. The results of this bleb-based model, shown in Fig. 4, demonstrate that the effective equilibrium modulus is dependent on the critical pressure for bleb initiation, plotted here relative to the applied pressure. Cells with a larger critical pressure have a larger effective equilibrium modulus up to a maximum value of ΔPc/ΔP = 0.87. From the experimental data, the critical pressures for membrane-actin cortex detachment for P0 and P1 chondrocytes were 0.59 kPa and 0.74 kPa respectively. Based on these critical detachment pressures, the bleb-based model predicts equilibrium moduli of 0.22 kPa at P0 and 0.41 kPa at P1. These match closely with the values of 0.16 kPa and 0.40 kPa calculated by fitting the experimental aspirated length data with the SLS model as shown in Fig. 1. Previous studies have shown that the strength of the membrane-actin cortex adhesion and the stiffness of the cell are associated with the organization of the cortical actin25. Therefore the present study examined whether chondrocyte passage in monolayer influences cortical F-actin. P0 and P1 chondrocytes were fixed, labelled with Alexa Fluor 555-phalloidin and imaged using confocal microscopy. Fluorescent images of cortical actin in representative P0 and P1 chondrocytes are presented in Fig. 5. At P1, chondrocytes exhibited a thicker actin cortex compared to P0 cells. Quantitative analysis of the fluorescent images demonstrated that the fluorescence intensity of the cortical F-actin staining significantly increased from P0 to P1. In addition, the expression of the membrane-actin cortex linker proteins ERM was measured using western blotting in P0 and P1 chondrocytes. Quantitative analyses of total ERM and phosphorylated ERM show that both increased at P1 compared to P0, the differences being statistically significant. Thus, the increases in ERM expression and cortical actin staining are consistent with the increased membrane-actin cortex adhesion. This study set out to examine the biomechanics of isolated chondrocytes and the effect of prior expansion in monolayer. A variety of experimental techniques have been proposed for the measurement of cellular mechanical properties, including atomic force microscopy, microcompression, compression in low modulus 3D gels, optical stretcher and real-time cytometry15,26–31. In the present study we have used micropipette aspiration, which has been widely used for the study of chondrocyte biomechanics and is ideal for quantification of the adhesion between the membrane and the actin cortex. Freshly isolated articular chondrocytes and the same cells following 9 days in monolayer culture both exhibited characteristic viscoelastic behaviour when subjected to micropipette aspiration in suspension. This time-dependent deformation was accurately fitted using the SLS model, as has been previously reported for chondrocytes22,32,33 and other cell types23,34,35. The resulting instantaneous and equilibrium moduli and viscosity values were similar to published values for chondrocytes7. However, this study shows that the effective equilibrium modulus of chondrocytes increases following culture in monolayer. These changes appear after the first passage in monolayer and are associated with dedifferentiation towards a more fibroblastic phenotype6. In addition, chondrocytes cultured in monolayer to P1 exhibited reduced susceptibility to bleb formation during aspiration. This reduced blebbing at P1 indicates strengthening of the bond between the cell membrane and the underlying actin cortex. Previously, the authors have shown that strengthening of
this bond causes an increase in the effective equilibrium modulus during differentiation of mesenchymal stem cells towards a chondrogenic lineage20.The present study therefore went on to test the hypothesis that the observed increase in chondrocyte equilibrium modulus with expansion in monolayer is due to increased membrane-actin cortex adhesion driven by changes in cellular structure.Using a modified micropipette aspiration protocol we show that the critical pressure for bleb initiation, i.e., the strength of the membrane-actin cortex adhesion, is increased from P0 to P1.This agrees with the measured reduction in bleb formation from P0 to P1.A theoretical model was used to further illustrate the influence of membrane-actin cortex adhesion on cell deformation during micropipette aspiration.The model also predicted the effective equilibrium modulus based on the critical pressure for membrane detachment and bleb initiation.Using this model with the experimental measurements of ΔPc at P0 and P1, the predicted equilibrium moduli values agree closely with those determined by fitting the experimental deformation data with the SLS model.This confirms that an important contribution to the increased effective equilibrium modulus at P1 is due to the strengthening of the membrane-actin cortex adhesion and the resulting reduction in bleb formation.Future modelling of bleb based phenomena may include the use of more complex particle based models with 3D interface tracking methods36.The mechanism responsible for this change in membrane-actin cortex adhesion may involve changes in either the cortical actin cytoskeleton and/or the expression of the ERM linker proteins; ERM.Using a quantitative confocal microscopy approach, we showed that cortical actin organization in rounded chondrocytes in suspension was more pronounced following culture in monolayer.The effects of this change in actin are unclear.As well as indirectly influencing the membrane-actin cortex adhesion, this increased actin may also directly increase the stiffness of the cell37.Alternatively the increased contractile actin will increase the cortical tension thereby increasing the intracellular pressure which may reduce the applied pressure required for membrane detachment.There was also an increase in expression of both phosphorylated and non-phosphorylated ERM proteins from P0 to P1.This agrees with previous studies in which an increase in ERM expression was associated with reduced bleb formation and increased stiffness in differentiated human mesenchymal stem cells19,20,38.Figure S2 shows a confocal time series demonstrating how initial deformation of the actin cortex during micropipette aspiration is rapidly followed by membrane detachment and breakdown of the underlying actin in an hMSC transfected with LifeACT-GFP.The resulting bleb then expands within the micropipette until a new actin cortex forms within the bleb limiting further extension.Although these images were obtained using hMSCs we suggest that similar mechanisms occur in chondrocytes.In the present study, the increased expression of ERM may lead to the faster reformation of the chondrocyte actin cortex following bleb formation.This may explain the tendency for those cells that did bleb at P1 to display multi blebbing behaviour rather than the single blebbing observed at P0.Thus the study shows, for the first time, that the culture of chondrocytes in monolayer leads to an up-regulation of ERM expression which together with alterations in cortical actin organisation increases 
membrane-actin cortex adhesion strength. This reduces the susceptibility of cells to form blebs and hence increases the apparent cell modulus obtained by micropipette aspiration. Furthermore, the changes in membrane-actin cortex adhesion during chondrocyte expansion may influence other important cellular functions such as migration39, endocytosis40 or differentiation38. MK, KS, LB and DL designed the research, KS and LB performed the research, KS, LB, DL and MK analysed the data, and KS, LB, DL and MK wrote the paper. None of the authors have any competing financial interests related to this paper.
Objective: Chondrocyte dedifferentiation is known to influence cell mechanics leading to alterations in cell function. This study examined the influence of chondrocyte dedifferentiation in monolayer on cell viscoelastic properties and associated changes in actin organisation, bleb formation and membrane-actin cortex interaction. Method: Micropipette aspiration was used to estimate the viscoelastic properties of freshly isolated articular chondrocytes and the same cells after passage in monolayer. Studies quantified the cell membrane-actin cortex adhesion by measuring the critical pressure required for membrane detachment and bleb formation. We then examined the expression of ezrin, radixin and moesin (ERM) proteins which are involved in linking the membrane and actin cortex and combined this with theoretical modelling of bleb dynamics. Results: Dedifferentiated chondrocytes at passage 1 (P1) were found to be stiffer compared to freshly isolated chondrocytes (P0), with equilibrium modulus values of 0.40 and 0.16 kPa respectively. The critical pressure increased from 0.59 kPa at P0 to 0.74 kPa at P1. Dedifferentiated cells at P1 exhibited increased cortical F-actin organisation and increased expression of total and phosphorylated ERM proteins compared to cells at P0. Theoretical modelling confirmed the importance of membrane-actin cortex adhesion in regulating bleb formation and effective cellular elastic modulus. Conclusion: This study demonstrates that chondrocyte dedifferentiation in monolayer strengthens membrane-actin cortex adhesion associated with increased F-actin organisation and up-regulation of ERM protein expression. Thus dedifferentiated cells have reduced susceptibility to bleb formation which increases cell modulus and may also regulate other fundamental aspects of cell function such as mechanotransduction and migration.
412
The role of bone marrow adipocytes in bone metastasis
Bone marrow adipocytes are one of the most abundant cell types found in bone marrow tissue.They constitute approximately 15% of the bone marrow volume in young adults, rising to 60% by the age of 65 years old .Previously considered as inert space filling cells with little biological significance, accumulating evidence demonstrates that bone marrow adipocytes are more than just passive bystanders of the marrow.They have a distinctive phenotype, which resembles both brown and white adipose tissue and are now recognised to have specialised functions .They store and secrete fatty acids, cytokines and adipokines among them leptin and adiponectin, which regulate calorie intake and insulin sensitivity, respectively.Morphologically bone marrow adipocytes are smaller in size than their visceral counterparts; however the net effect of fatty acid uptake is similar due to enhanced triacylglycerol synthesis.They have the potential to influence neighbouring cells by autocrine, paracrine and endocrine signalling making them a powerful player in influencing the bone microenvironment as a whole.Marrow adipocytes and osteoblasts share common progenitor cells, known as bone marrow mesenchymal stromal cells .Their lineage commitment is thought to be regulated by adipogenic and osteogenic factors in the bone microenvironment that activate their respective transcriptional programs.However, in recent years the identification of MSC subpopulations that are thought to be lineage committed has added another level of complexity.The balance between these two cell types appears to play a pivotal role in bone homeostasis and so when the scales are tipped in favour of adipogenesis then by default osteoblastogenesis is negatively regulated.Moreover, there is a building body of evidence to suggest that a subpopulation of adipocytes are generated from bone marrow myeloid cells, posing the question as to how these differ in their function and behaviour to adipocytes generated from MSCs .Furthermore, bone marrow adiposity is also known to inhibit haematopoiesis .There is considerable evidence to support a metabolic role for marrow adipocytes however their influence on the development and progression of metastatic bone disease is only now becoming apparent.The bone provides a unique and supportive microenvironment for a number of solid tumour metastases including breast, prostate and the haematological malignancy multiple myeloma .Cancer cells that intrude into this microenvironment produce various cytokines and growth factors which dysregulate the normal coupling of osteoclasts and osteoblasts.The increased bone resorption releases a number of factors which act positively upon the cancer cells thus perpetuating a “vicious cycle”, a feed-forward cycle that is critical to the establishment of bone metastasis.However, it would be short sighted to think that bone metastases only impinge upon osteoblasts and osteoclasts, as there are many more cells residing in the bone marrow such as fibroblasts, macrophages and adipocytes whose contribution should not be ignored.Bone metastatic cancers primarily occur in older patients whose bone marrow is heavily populated by adipocytes .In recent years there has been building interest in the contribution of marrow adipocytes to metastatic disease.In breast, multiple myeloma and prostate there is demonstrable evidence that marrow adipocytes attract and interact with cancer cells, however the advantages these interactions bestow are still open to debate.Diet-induced obesity has been shown to 
promote development of a myeloma-like condition, and to increase prostate cancer-induced bone disease. Cancer cells are attracted to adipocytes within the metabolically active red marrow of the bone, and these adipocytes interact closely with their neighbouring cells. These observations suggest that an adipocyte-rich environment could fuel disease by creating a permissive, favourable niche for cancer cells to establish and progress. Both breast cancer and MM cause osteolytic lesions, inhibiting osteoblast differentiation and thereby tipping the balance in favour of osteoclastic activity. In contrast, prostate cancer predominantly causes osteoblastic bone disease. Interestingly, increased marrow adiposity has been associated with both osteolytic and osteoblastic disease. However, one crucial factor these two processes have in common is the need for energy. Adipocytes are filled with numerous lipid droplets which serve as an effective source of fatty acids when metabolic demand is increased. Podgorski and colleagues demonstrated that lipids can be trafficked between adipocytes and cancer cells, fuelling tumour growth and invasiveness by upregulating FABP4, IL-1β and HMOX-1 in the metastatic tumour cells. Adipocytes also support cancer cells in an endocrine manner, secreting growth factors, adipokines and chemokines that lead to tumour survival. In MM, factors such as IL-6, TNF-α, CXCL12 and leptin play a role in disease establishment and progression, promoting cell proliferation and migration as well as preventing apoptosis. In prostate cancer, the chemokines CXCL1 and CXCL2 have been implicated in promoting tumour-associated bone disease by upregulating osteoclastogenesis, and in turn promoting tumour cell survival. Recently, breast cancer cells have been shown to be recruited to bone marrow adipose tissue by the secretion of IL-1β and leptin. The abundance of marrow adipocytes in ageing bones may increase the fertility of the bone microenvironment by providing a constant source of energy and growth factors for cancer cells to thrive and progress in these skeletal sites. However, the identification of bone marrow adipocytes as a major source of circulating adiponectin, greater than white adipose tissue, raises the possibility that bone marrow adipocytes may also have anti-tumour functions due to the tumour-suppressive effects of adiponectin. Adipocytes located in close proximity to invasive cancer cells in the primary tumour exhibit profound phenotypic changes that include both morphological and functional alterations and are often referred to as cancer-associated adipocytes. The morphological changes associated with these cells include loss of lipid content and acquisition of a fibroblast-like/preadipocyte phenotype. Functionally they exhibit a decrease in expression of adipocyte-related genes such as adiponectin, FABP4 and resistin, coupled with an increase in the production of the pro-inflammatory cytokines IL-6 and IL-1β. These changes were primarily reported in breast cancer studies in association with white adipose tissue; however, recent in vitro work suggests these changes are also important in the bone marrow. Given the potential tumour-supporting role of adipocytes, targeting these cells either alone or in combination with common therapeutics may be a promising approach. Modulating levels of adipokines such as adiponectin has been shown to exert an anti-tumour effect. Pharmacological enhancement of circulating adiponectin by the apolipoprotein mimetic L-4F was shown to cause cancer cell death in mouse
models of myeloma .Due to the increasing importance of lipid metabolism in tumour cell survival, drugs have been developed that target essential molecules of fatty acid synthesis and uptake.Chemical or RNAi-mediated inhibition of key enzymes involved in fatty acid synthesis, including fatty acid synthase , acetyl-CoA-carboxylase and ATP-citrate lyase has been shown to attenuate tumour cell proliferation and induce cell death in a number of different cancer cell lines and mouse models .Approaches that regulate the balance between adipogenesis and osteogenesis may also be effective in maintaining healthy bone homeostasis thereby preventing cancer infiltration.Modulation of the nuclear receptors, glucocorticoid receptor and PPARγ and their respective pharmacological ligands, corticosteroids and thiazolidinediones, directly regulate osteogenic versus adipogenic differentiation of MSCs and so could be targeted accordingly.Another such target is protein kinase C which also promotes osteogenesis and has anti-tumourigenic properties .Investigating these treatment strategies more closely may provide new insight in to which pathways are being exploited by cancer cells in order to evade conventional treatments.Targeting adipocytes as part of a combination therapy may prove to be a valuable tool, however a greater understanding between the balance of tumour-promoting and tumour-suppressive effects of bone marrow adipocytes is required.Over the last few decades the contribution of adipocytes to disease establishment and progression has become clearer.With aging and obesity resulting in increased numbers of bone marrow adipocytes, it is important to further understand the influence these cells are having on their environment.Targeting adipocytes and their products may open new therapeutic avenues in the fight against lethal metastatic cancers.
Adipocytes are a significant component of the bone marrow microenvironment. Although bone marrow adipocytes were first identified more than 100 years ago, it is only in recent years that an understanding of their complex physiological role is emerging. Bone marrow adipocytes act as local regulators of skeletal biology and homeostasis, with recent studies suggesting that marrow adipose tissue is metabolically active, and can function as an endocrine organ. As such, bone marrow adipocytes have the potential to interact with tumour cells, influencing both tumour growth and bone disease. This review discusses the current evidence for the role of bone marrow adipocytes in tumour growth within the bone marrow microenvironment and the development of the associated bone disease.
413
A review of remote sensing for mangrove forests: 1956–2018
Mangrove forests are tropical trees and shrubs that grow along coastlines, mudflats, and river banks in many parts of the earth. They are among the most productive and biologically significant ecosystems because they supply numerous goods and services to society in addition to benefitting both coastal and marine systems. However, over the last two decades of the 20th century, around 35% of the world's mangrove forests disappeared, putting mangroves in peril. Because of the harsh environment in mangrove ecosystems, remote sensing (RS) has served as a sustainable tool in studies of mangrove forests. For several decades now, with the development of earth observation capacity, RS of mangroves has not been limited to mapping their extent, but has also addressed many complex topics, such as biophysical parameter inversion and ecosystem process characterization. To date, over 1300 scientific papers have been published on various topics in the field of mangrove RS, but the key milestones have not been highlighted, so that the development process, historic contributions, and driving forces are still not clear. To our knowledge, six review papers have focused on mangrove RS since 2010. Among these post-2010 reviews, Kuenzer et al. provided a comprehensive overview of all the sensors and methods undertaken in mangrove research, and further discussed their potential and limitations. Heumann and Wang et al. reviewed recent advancements in RS data and techniques and described future opportunities. Purnamasayangsukasih et al. reviewed the uses of satellite data in mangrove RS with a main focus on the abilities, benefits, and limitations of optical and radar imagery. Giri gave a brief summary of the nine papers published in a special issue, and also emphasized recent improvements in mangrove RS that have been achieved in terms of RS data availability, classification methodologies, computing infrastructure, and availability of expertise. Cardenas et al. intended to challenge scientists to take advantage of all publicly available imagery, processing facilities and datasets, and emphasized the need for scientists to acquire programming skills. These reviews could serve as good starting points for researchers who want to learn about mangrove RS. However, there still exist three critical gaps: 1) Most of the existing reviews organize papers according to data types, not research topics. The only exception is Heumann. Regardless, the chronological evolution of research topics is not discussed. Consequently, it is hard to understand why different research topics on mangroves were proposed in the past and, more importantly, what the potential research topics are in the near future. 2) Key milestones in mangrove RS are not clear. Existing reviews are based on a large number of published articles, which overwhelms a general audience, as many of them overlap with regard to their topics and methods. On the other hand, it is imperative to understand the main stream of research in mangrove RS. This can only be made available by identifying the key milestones associated with each distinctive research topic; specifically, a) who first initiated a new topic, b) in which year, and c) which work received the most attention. 3) Driving forces are not mentioned. The most pressing question for mangrove RS is to predict the potential future research topics. A solution to this question can only be sought by understanding the driving forces behind the existing research topics. At this point, none of the review articles has attempted to reveal these forces, and it is non-trivial to identify such forces so as to project future research topics. Based on the above analysis, this article does not intend to make an all-embracing review, but aims to find the skeleton of the mangrove RS development process by organizing scientific papers according to their research topics in chronological order. For each identified research topic, only the first publications and the most cited article will be introduced as the key milestones, and the current state of knowledge will also be given. Thus, the objectives of this study are: 1) to identify key milestones of RS of mangrove forests to provide a historical overview of this research field in chronological order; 2) to discover key drivers for the evolution of different milestones so as to analyze theoretical developments of mangrove RS; and 3) to project future research directions in mangrove RS. We summarized the historical evolution of mangrove RS in Fig. 1, according to the research topics, RS techniques, and sensors.
In the following subsections, we provide more detailed descriptions of the evolution in each decade, namely before 1989, 1990–1999, 2000–2009, and 2010–2018. Mangrove forests are highly productive ecosystems dominating the intertidal zones along tropical and subtropical coastlines. To effectively study mangrove areas and to monitor their changes over time, accurate, timely, and cost-effective mapping techniques are required. The history of mapping mangrove extent with RS data can be traced back to the 1970s. Most of the mangrove extent mapping work conducted with RS data before 1989 lacked accuracy assessment. Subsequently, two studies mapping mangrove extent were conducted with accuracy assessment using Landsat TM, SPOT XS or airborne images during 1990–2000. Then, with the accumulation of RS data over the past few decades, some studies of mangrove forest temporal change detection were conducted during 2000–2010. Afterwards, Spalding et al. provided the first truly global assessment of the state of the world's mangroves. Several studies mapping mangrove extent at large scales followed after 2000, using medium-low spatial resolution RS images. In 2017, Chen et al. mapped the spatial extent of China's mangroves. The advantage of this study is that they developed a phenology-based algorithm to identify mangrove forests by analyzing a large volume of satellite images using Google Earth Engine (GEE), a cloud-computing platform. Approximately 435 studies on mapping mangrove extent have been published to date. Giri et al. mapped the status and distribution of global mangroves using available Landsat data, which led to a sharp increase in the number of citations. All publications can be grouped into two categories. Before 2011, most of the studies focused on mapping mangrove forest extent by exploring different types of RS data. After 2011, studies aiming to map mangrove extent at large scales have drawn more attention. Leaf area index (LAI) is one of the most important indicators for predicting photosynthesis, respiration, carbon and nutrient cycling, transpiration and rainfall interception. Most work on LAI estimation before 1990 used ground-based methods, which were extremely time consuming and made it difficult to capture the large-scale spatial and temporal variability of LAI, especially over difficult terrains such as mangrove forests in the intertidal zone. Most mangrove forests grow in relatively small patches and linear stands. The spatial resolution of satellite RS data before 1990 was low, which could not distinguish these details, and the results were difficult to verify. The emergence of the high-resolution SPOT-1 satellite imagery after 1990 provided the possibility of mangrove LAI inversion. Ramsey and Jensen established the relationship between in-situ canopy spectra and mangrove LAI, which provided a reference for the inversion of mangrove LAI from satellite data. Green et al. found SPOT and Landsat image-derived NDVI to be in good correlation with ground-measured LAI. This study mapped mangrove LAI with RS imagery for the first time, opening the door to new sources of data to effectively characterize mangrove LAI.
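The empirical NDVI–LAI approach used in these early studies can be illustrated with a short sketch. This is a hedged example rather than the workflow of Ramsey and Jensen or Green et al.: the reflectance values, field LAI values and fitted coefficients are invented for illustration, and a real application would calibrate the regression against field-measured LAI plots and the specific sensor's bands.

```python
# Minimal sketch of the classic empirical approach: compute NDVI from red and
# near-infrared reflectance, fit a linear regression of field-measured LAI
# against NDVI, then apply it to new pixels. All numbers below are illustrative.
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-10)

# Hypothetical calibration plots: surface reflectance and field-measured LAI.
red_cal = np.array([0.06, 0.05, 0.04, 0.07, 0.05])
nir_cal = np.array([0.30, 0.35, 0.42, 0.28, 0.38])
lai_cal = np.array([2.1, 2.8, 3.9, 1.8, 3.2])

x = ndvi(red_cal, nir_cal)
slope, intercept = np.polyfit(x, lai_cal, 1)   # simple least-squares fit
r = np.corrcoef(x, lai_cal)[0, 1]
print(f"LAI = {slope:.2f} * NDVI + {intercept:.2f} (r = {r:.2f})")

# Apply the fitted relationship to NDVI from a new image (here a toy 2x2 array).
red_img = np.array([[0.05, 0.06], [0.04, 0.07]])
nir_img = np.array([[0.33, 0.29], [0.40, 0.27]])
lai_map = slope * ndvi(red_img, nir_img) + intercept
print(lai_map)
```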
Approximately 64 studies on mapping mangrove LAI have been published to date, most of which came after Ramsey and Jensen and Green et al. These publications mainly focused on exploring the potential of different types of RS data, including in-situ hyperspectral data, high-resolution imagery, medium resolution imagery, airborne hyperspectral data, radar data, and most recently unmanned aerial vehicle (UAV) multispectral images. With the preliminary problem of extent mapping solved to some extent by 1999, the hotspot of mangrove RS studies during 2000–2009 turned to more detailed characterizations, specifically species classification, vertical structure mapping, and health condition retrieval. The solution of these problems has largely benefitted from the launching of new spaceborne RS sensors, especially those providing global high spatial resolution data. In the previous decade, most research focused on mangrove extent mapping, but was not able to distinguish different mangrove species. The major obstacle is that mangroves of one species usually form narrow strips or small patches, and are thus not identifiable in satellite images. Airborne high resolution data, although reported to be useful for mangrove species classification, are site-specific and not available for all areas. The launching of high spatial resolution satellite sensors since 1999 has enabled the efficient mapping of mangrove species over large areas. Wang et al. was the first study to successfully classify mangrove species. Using IKONOS 1-m panchromatic and 4-m multispectral images, three mangrove species along the Caribbean coast of Panama were separated with 70%–98% accuracy. This study demonstrated the necessity of integrating object-based image analysis into mangrove species classification, and has been the most influential publication on this problem. The critical issue of optimal scale parameter selection for object segmentation was solved by searching for the highest class separability. In addition, a comparison between the first high resolution satellite images found that better accuracy was achieved using IKONOS than QuickBird, while QuickBird is more affordable. Approximately 310 species-level mangrove RS studies have been published to date, most of which came after Wang et al.
and Wang et al.Starting from 2011, the number of publications start to take off, marking the recognition of mangrove species mapping as a mature procedure.These publications can be grouped into three categories.First, one type of works continued on improving mangrove species classification by modifying the algorithm or using new data.Second, some studies investigated for different mangrove species the other parameters such as LAI.Furthermore, with species-level information available, multi-temporal analysis are implemented to study the dynamics of mangroves at individual species level.After achieving some success on mapping the horizontal extent of mangroves in the past three decades, the hotspot of mangrove study has turned to the retrieval of 3D parameters, more specifically the estimation of height and biomass.Biomass, generally defined as the amount of organic matters, can be further used to estimate carbon stock, which is the quantity of carbon in mangroves.The correlation between mangrove structure parameters and RS data has been found significant last century.On this basis, researchers have tried to retrieve the structure and biomass of mangroves using airborne data.However, large scale mapping of mangrove vertical structure has been lacking due to the high density of mangrove trees and roots as well as their flooded habitats.Simard et al. successfully estimated mean tree height and biomass in the Everglades National Park in south Florida using shuttle radar topography mission elevation data and has been the most cited publication on this topic.By calibrating the SRTM elevation with airborne LiDAR data using a quadratic function, mean mangrove height was estimated with 2.0 m RMSE.Subsequently, stand level biomass was estimated from mean height using the linear allometric equation constructed from field surveyed biomass and tree height.To date, 71 articles worked on mangrove height and 157 worked on mangrove biomass, the most of which overlap.Mangrove height is estimated from a canopy height model, which is usually derived from LiDAR data where height is directly measured or image stereopairs where 3D model can be constructed.With height information available, biomass is then estimated from height using predefined allometric equations.To improve the accuracy of biomass estimation, effort has been put into constructing better allometric equations.In addition, the uncertainty analysis of mangrove height products has also drawn attention.Another approach to estimate mangrove biomass followed the inspiration by Mougin et al. and Lucas et al., and estimated biomass according to its relationship with spectral reflectance or radar backscattering parameters.Mangrove carbon stock refers to the amount of carbon stored in mangroves.Depending on the specific task, the ‘carbon stock’ can mean carbon in the mangrove plants or carbon in the mangrove ecosystems.With the increasing awareness of mangroves as an effective long-term carbon sink, the impact of mangroves on global carbon dynamics becomes more and more recognized.As a result, mangrove RS started to estimate the carbon stock.In the new decade, the systematic study of mangrove carbon stock estimation has developed as a new branch in mangrove RS studies.Fatoyinbo et al. tried to estimate mangrove biomass carbon stock, assuming that 50% of the dry biomass is carbon.With the biomass estimated from SRTM derived tree heights, the biomass carbon stock was assessed.However, the accuracy was not assessed.Wicaksono et al. 
was the first study that focused on the carbon stock mapping of mangrove ecosystems.Both above ground carbon and below ground carbon were calculated from Landsat ETM+ imagery.After comparing different vegetation indices and mangrove fraction derived by spectral unmixing, the maximum accuracy was achieved using the linear regression with global environment monitoring index.For AGC, 62% variation of carbon stock was explained, with standard error of 93.5 Tg C/ha.For BGC, 56.18% variation of carbon stock was explained, with standard error of 26.98 Tg C/ha.Although still in the emerging stage, publications on mangrove carbon using RS have reached 90.Following Fatoyinbo et al., one approach is to estimate biomass carbon stock by assuming 45%–50% of the biomass is carbon.Most studies used this to provide carbon estimate from field surveyed biomass to provide reference data.On the other hand, following Wicaksono et al., one approach is to estimate carbon stock using regression models from vegetation indices and parameters.Mangroves are considered the most productive in all ecosystems, presuming that they are in good health condition.However, when temperature, salinity and other factors are sub-optimal, mangrove plants become stressed, thus their function as the “coastal kidney” gets hampered.It has long been noticed that mangroves of different health conditions can be differentiated from radar.However, the health of mangroves depends on a set of climatological and tidal variables and their interactions.As a result, little attention has been put to monitoring health conditions of mangroves.Kovacs et al. concluded that multi-polarized spaceborne synthetic aperture radar could be used to distinguish healthy and degraded mangroves because a significant correlation between the backscattering coefficients of ENVISAT SAR and LAI was found.LAI was used as the indicator of mangrove health because a distinctive increase of LAI was noticed from the sample white mangrove plots of dead, poor and healthy conditions.In terms of spectral reflectance, Wang and Sousa found out the difference in leaf reflectance between healthy and stressed mangroves.Four band ratio indices were constructed using narrow band reflectance from laboratory hyperspectral measurements.ANOVA revealed that these indices can effectively distinguish healthy and stressed mangroves of the same species.Only 78 studies have been published regarding health conditions of mangroves, which can be separated to two different approaches.First, following Kovacs et al. 
and Wang and Sousa, vegetation indices and parameters derived from hyperspectral or radar data were used as proxy of mangrove health condition.These indices include but are not limited to LAI, photochemical reflectance index, NDVI, percent tree cover, enhanced vegetation index, and leaf Chl-a concentration.Second, classification methods have also been used to distinguish mangroves of different health status.With the extensive ongoing studies of RS-based mangrove forests mapping and structure inversion, results of mangrove extents, species distributions, and primary parameters are accurate enough to carry out further research.At the same time, the development of mangrove ecological functions and global climate change research has pushed the RS-based mangrove analyses to a comprehensive level.From 2010 to 2018, the most significant improvement of RS-based mangrove research is that mangroves are considered as a coupled ecosystem participating in global carbon cycling and energy balance, and responding to global climate change.These new studies can be concluded to three topics as follows.Carbon flux, defined as the rate of exchange of carbon between pools, directly refers to the global carbon cycling.Due to the high rates of carbon sequestration and the specific position at the terrestrial-ocean interface, mangrove forests are considered to have a unique contribution to global carbon cycling and received significant attention in carbon fluxes research.However, until now only 12 papers focused on RS-based mangrove carbon fluxes.This limited amount is a combination result of challenges associated with in situ flux studies and rarely accessible high temporal resolution RS data.In 2012 and 2013, tower-based CO2 eddy covariance in conjunction with EVI derived from the Moderate Resolution Imaging Spectroradiometer were utilized to estimate seasonal and annual CO2 fluxes and canopy-scale photosynthetic light use efficiency of mangrove forests in Florida Everglades.The model developed in these studies provided the first framework for estimating CO2 fluxes of mangroves using RS data and environmental factors.In 2013, Zulueta et al. measured CO2 fluxes of mangroves, desert, and marine ecosystem from an aircraft which incorporated instrumentation for eddy covariance measurements and low-level RS.They concluded that mangroves showed the highest uptake of CO2.Evapotranspiration is the sum of evaporation and plant transpiration from the Earth surface to the atmosphere.RS has proved to be an effective tool for estimating ET rates and other energy balance parameters in different ecosystems such as agricultural lands and terrestrial forests.However, due to the limitation of data source, very limited studies focused on RS-based estimation of mangrove ET and other energy balance parameters.In 2015, Lagomasino et al. 
combined long-term datasets acquired from Landsat TM and the Florida Coastal Everglades Long-Term Ecological Research project to investigate ET, latent heat, and soil heat flux of mangrove ecotone in the Everglades.Modeled results from Landsat data were calibrated and tested using the environmental and meteorological parameters collected from the eddy-covariance tower and weather tower, providing relationships between energy and water balance components which also applied to other mangrove systems.Threats to the mangroves from changes in sea-level and temperature are the greatest compared to other factors such as atmospheric composition and land surface alterations.According to Alongi, mangroves would be set landward or disappear due to the continuous rise in sea-level and no change in sedimentary.Furthermore, most mangroves would be degraded, because the areas for mangrove landward migration are already occupied by man-made structures such as ports, dams, and ponds in many parts of the world.However, until now only two RS-based research provided particular discussion on how climate factors impacted mangroves.Due to the lack of long-term continuous climatic variables dataset, most studies did not analyze the relationship between climate change and mangroves.In 2015, Srivastava et al. integrated RS data and meteorological data to assess the impacts of climate change on the mangrove ecosystem.Their results showed that rainfall and sea-level rise significantly affected the extent and density of mangrove species, mean sea level and wind speed were inversely related to mangrove area, and increment of temperatures could cause the mangrove extent to decrease.In 2018, Pastor-Guzman et al. presented the first regional characterisation of mangrove phenology, and concluded that cumulative rainfall in cold and dry season has a direct impact on mangrove phenology.As a unique type of forest, mangroves are found along the coasts of tropics or subtropics, occupying only 0.4% of global forests.To detect driving forces of mangrove RS development, we assume that research of mangrove RS have certain relations with research of forest RS.In total, 1208 mangrove RS papers and 37,152 forest RS papers were published to date.Basically, current literatures on mangrove RS can be divided into three sub-fields depending on the complication of ecological issues that can be addressed by RS applications: mangrove distribution mapping, biophysical parameters inversion, and ecosystem process characterization.This study compares the evolution of mangrove RS with terrestrial forest RS in the abovementioned three aspects.Vegetation distribution mapping is a traditional and essential task of RS.According to our literature survey, mangroves distribution mapping can be concluded into two stages: extent mapping and species distribution mapping.Historically, extent mapping of both mangroves and terrestrial forests were first conducted using aerial photography before 1970.Then, the development of satellite sensors promoted extent mapping to individual species level.Terrestrial forests species mapping started from early 1970s, but studies with acceptable classification accuracy were published around 1985 by interpreting Landsat TM imagery.Although terrestrial forests species can be distinguished from Landsat TM, these data were unable to discriminate mangrove species.This is probably due to the coarse spatial resolution of Landsat TM and the patchy growth forms of mangrove stands.The first high accurate mangrove species mapping paper 
was published after the launch of high resolution satellite sensors; Wang et al. used high resolution IKONOS satellite imagery to map mangrove species in Punta Galeta, Panama, and achieved an average accuracy of 91.4%. A large body of mangrove species mapping research has appeared since the success of this study, most of it based on high resolution satellite imagery. Therefore, we conclude that the huge time lag between terrestrial forest species mapping and mangrove species mapping was caused by the availability of proper RS data. In other words, the development of mangrove distribution mapping is driven by sensor progress. Forest biophysical parameters are important for studies of the carbon cycle and global climate. According to our literature survey, mangrove biophysical parameter inversion can be grouped into two types: LAI inversion and biomass estimation. The time lag between the first remote-sensing-based terrestrial forest LAI research and the first mangrove LAI research was not long. The first RS-based study focusing on a forest ecosystem was published in 1987. Shortly after this, Jensen et al. did an intensive in situ sampling of mangroves in Florida in 1988, and related mangrove canopy LAI to a vegetation index generated from the SPOT XMS sensor. RS-based terrestrial forest biomass estimation started from 1987. Wu suggested a potential application of multipolarization SAR data for pine-plantation biomass. Mangrove biomass estimation was first published by Mougin et al., 1999, using multifrequency and multipolarization polarimetric AIRSAR data to retrieve information on the structure and biomass of mangroves in French Guiana. Although mangroves' high productivity and essential role in supplying organic materials to coastal ecosystems had been recognized since the 1980s, the time lag between studies of terrestrial forest biomass and mangrove biomass is notable. This lag can be explained in two aspects: first, the lack of fundamental ground truth information due to the numerous difficulties encountered during field studies in coastal environments; second, and most important, the lack of proper RS data (the earliest mangrove biomass work used airborne data, which is rarely acquired). Therefore, we conclude that the time lag between terrestrial forest biophysical parameter inversion and mangrove biophysical parameter inversion was caused by the availability of proper RS data. In other words, the development of mangrove biophysical parameter inversion is driven by sensor progress. Critical research problems involving forest response to global change require characterization of ecosystem processes. Current RS-based research on mangrove ecosystem processes includes carbon fluxes and evapotranspiration. Studies of RS-based estimation of terrestrial forest ecosystem processes have been published since the late 1980s. Running et al. mapped regional forest ET by combining satellite data with ecosystem simulation. Waring et al.
used seasonal RS data and longtime meteorological data to estimate forests CO2 exchange in Harvard Forest.Although many field and greenhouse studies have investigated the rate and mechanisms of mangrove productivity, to date only a few studies have been conducted to use RS for the estimation of carbon and water exchanges in mangrove ecosystems."The first studies focusing on mangrove's carbon fluxes and ET were published in 2012 and 2015, respectively.Both studies were conducted based on long term medium resolution RS data and carbon fluxes tower data acquired from the only mangrove tower in Everglades National Park.The huge time lag between terrestrial forests and mangrove forests can be explained in three aspects: 1) The total area of mangroves is small, occupying only 0.4% of global forests.As a result, their role in global carbon cycle was neglected in the early time.2) Traditional high temporal resolution satellite data with coarse spatial resolution, such as AVHRR, were not suitable for mangrove studies, because mangrove pixels in these images are often mixed with other coastal land covers.3) Ecosystem process characterization needs amount of field work, especially long term critical field measurements of carbon and water fluxes.To date, there are 177 terrestrial forest carbon fluxes towers enrolled in FLUXNET with the first tower built in 1990.However, there is only one mangrove carbon fluxes tower which was built in 2003.Therefore, we conclude that the time lag between terrestrial forest ecosystem process characterization and mangrove ecosystem process characterization is caused by the availability of carbon fluxes tower and appropriate RS data.In other words, the development of ecosystem process characterization is driven by data accessibility.As discovered in the previous section, sensor advancement has led to emergence of key milestones in the history of mangrove RS.Although a significant number of remote sensors have been launched in the last decades, an unparalleled amount of new sensors have been set forth to launch in the years to come.As such, in the following section, we share our insights on how new opportunities will arise for six existing research topics as well as a new one.Mangrove forests mapping is the basis of other mangrove RS topics.Although extent mapping has been studied for more than 60 years, there are still great challenges and opportunities.In our opinion, two major improvements can be made in the future research.Conducting dense-temporal and fine-spatial resolution global mapping.In 2011, Giri et al. 
mapped global mangrove forests for the first time using RS images, which demonstrated substantial advancement toward global mangrove monitoring efforts. In 2016, Hamilton and Casey created a 30 m spatial resolution annual global mangrove database from 2000 to 2012. However, the spatio-temporal resolution is rather coarse. Two recent developments in the earth observation sector have the potential to significantly improve the efficacy of mangrove monitoring across the globe. First, the European Sentinel-2A and 2B satellites comprise a global multi-spectral mission whose data are open to the global public. Launched by the European Space Agency in June 2015 and March 2016, respectively, these two satellites provide 5-day repeat and 10 m spatial resolution imagery globally, enabling high spatio-temporal monitoring of mangrove forests. Second, the novel computing platform of GEE, which houses a complete and continually updated archive of pre-processed Sentinel-2 data, has enabled the efficient development of global-scale data products. Considering tidal influences. The mangrove extent monitored by satellite RS can vary depending on the instantaneous tidal level at the time the satellite images were taken. Although this limitation has been recognized for more than 15 years, we still lack understanding of how tidal level affects the reflectance of mangrove forests. In recent years, the wide use of flexible UAVs offers great opportunities to quantitatively address the effects of tidal height on spectral reflectance. UAVs can be used to acquire images of mangroves at almost any time during local flood and ebb tides. Therefore, we could estimate mangrove extent by combining spectral reflectance from satellite images and instantaneous tidal height from UAVs, and current mangrove maps could thereby be effectively improved. It should be noted that UAVs also have disadvantages, such as limited aerial extent and relatively lower stability compared to other RS platforms, so we recommend using UAVs to collect data over small areas to facilitate large-scale projects. Composition and distribution of mangrove forest species are essential for conservation efforts and further mangrove investigation. In our opinion, two major improvements are feasible. Continental- or global-scale species distribution mapping. To date, all mangrove species mapping studies have been conducted at local scales, and continental or global-scale species distribution results are unavailable. There are two major barriers. First, due to the frequent clouds and cloud shadows over mangrove swamps, high quality fine-resolution RS data that fully cover a large area are difficult to acquire, even commercially. Second, operating algorithms on a large number of image archives requires specialized expertise and software, powerful computing facilities, and significant time dedication. Two recent developments in the earth observation sector have the potential to significantly improve large scale mangrove species mapping. First, better multi-source data can be combined. Dense series of multispectral satellite data provide a good basis for the large-scale mapping of mangrove forest composition, while further data may be added from recently launched SAR missions such as Sentinel-1. Although a significant increase in accuracy is not guaranteed by adding SAR, the free availability of most of the data could be a motivation to investigate such approaches. Second, the novel cloud computing platform of GEE, with its large archive of pre-processed satellite datasets and its powerful parallel computing capacity, further facilitates large-scale mangrove species mapping.
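As a concrete illustration of how these two developments could be combined, the sketch below uses the Google Earth Engine Python API to build a cloud-masked Sentinel-2 median composite, an NDVI layer, and a crude candidate-mangrove mask over an arbitrary coastal region. It is a hedged example of the general workflow, not the pipeline of any study cited here; the region, dates, thresholds and the low-elevation heuristic are placeholders that would need to be adapted and validated against reference mangrove data.

```python
# Minimal sketch (Google Earth Engine Python API, assumes prior ee.Authenticate()):
# cloud-masked Sentinel-2 median composite, NDVI, and a crude candidate-mangrove
# mask combining an NDVI threshold with a low-elevation constraint from SRTM.
import ee

ee.Initialize()

# Placeholder coastal region of interest (lon/lat rectangle).
roi = ee.Geometry.Rectangle([117.8, 23.8, 118.0, 24.0])

def mask_s2_clouds(image):
    """Mask clouds and cirrus using the Sentinel-2 QA60 bitmask."""
    qa = image.select('QA60')
    cloud_bit, cirrus_bit = 1 << 10, 1 << 11
    mask = qa.bitwiseAnd(cloud_bit).eq(0).And(qa.bitwiseAnd(cirrus_bit).eq(0))
    return image.updateMask(mask).divide(10000)  # scale to surface reflectance

composite = (
    ee.ImageCollection('COPERNICUS/S2_SR')
    .filterBounds(roi)
    .filterDate('2020-01-01', '2020-12-31')
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
    .map(mask_s2_clouds)
    .median()
)

ndvi = composite.normalizedDifference(['B8', 'B4']).rename('NDVI')
elevation = ee.Image('USGS/SRTMGL1_003').select('elevation')

# Crude heuristic only: dense vegetation (NDVI > 0.4) on low-lying land (< 10 m).
candidate_mangrove = ndvi.gt(0.4).And(elevation.lt(10)).selfMask()

print(candidate_mangrove.getInfo()['bands'])  # quick sanity check of the result
```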
Distinguishing more mangrove forest species. Globally, there are over 100 species of mangroves. Vaiphasa et al. proved that at least 16 mangrove species could be distinguished using six hyperspectral channels. However, in most published RS applications, no more than five species were discriminated. The recently available dense series of multitemporal Landsat-8 and Sentinel-2 data better capture mangrove phenology, which could possibly assist species discrimination. However, whether phenology information can be used to reliably identify mangrove species remains a question to be explored. Building a spectral library. The spectral characteristics of different mangrove species have not been fully defined. Therefore, to assist species classification, we call for researchers in mangrove RS to collectively build a definitive spectral library of mangrove species under various environmental conditions. It should be noted that mangroves under some environmental conditions are not accessible. To collect the hyperspectra of those mangroves, we suggest mounting high resolution hyperspectral sensors on UAVs. Nevertheless, UAV hyperspectral RS is an emerging protocol, the robustness of which still needs improvement. LAI is one of the most significant indicators of primary productivity in mangrove wetland ecosystems, associated with many biological and physical processes of mangroves. Current RS-based methods for retrieving LAI can be grouped into two categories according to the type of RS data: passive optical and active LiDAR. However, both still face critical obstacles for the inversion of mangrove LAI. Passive optical RS-based methods. Although successful inversion of mangrove LAI with passive optical RS images has been reported in many studies, the challenges associated with interference from complex backgrounds and various mangrove species have not yet been effectively controlled. Most of the existing studies on extracting LAI share the common characteristic that the species is singular and the background is homogeneous. However, in mangrove forests, it is likely that both the background and the species vary. UAV platforms provide various types of very high spatial resolution RS data at flexible acquisition time intervals, which offers terrific opportunities to eliminate the effects of background and species in the estimation of mangrove LAI. In addition, some satellite RS images with relatively lower spatial resolution but higher spectral resolution than UAV images also have great potential for solving the background and species issues. Airborne LiDAR can provide detailed estimates of forest vertical structure. However, it is often logistically difficult to use airborne LiDAR for multi-temporal and large-scale forest monitoring. The first spaceborne LiDAR system, the Geoscience Laser Altimeter System (GLAS), has been successfully used for collecting repetitive and extensive forest LAI. To the best of our knowledge, GLAS has not been applied to the retrieval of mangrove LAI to date because of its sparse spatial distribution. The Ice, Cloud, and land Elevation Satellite-2 and the Global Ecosystem Dynamics Investigation LiDAR were launched in 2018, and will generate a large amount of spaceborne LiDAR data at a frequent revisit. Therefore, it is worthwhile to explore new methods for estimating mangrove LAI at a continental or global scale with spaceborne LiDAR data in the near future. It should be noted that,
It should be noted that, besides passive optical and LiDAR approaches, radar data have also been used for mangrove LAI inversion in a few existing studies; however, the high moisture content of mangrove forests has hindered research on LAI inversion with radar data. The main obstacles for RS-based retrieval of mangrove structure, biomass and carbon stock are that only a small number of structural parameters are currently estimated and that ground truth data are hard to collect. We consider that this research can be improved in three respects. The first is individual tree detection. Such methods have been widely used to count and measure individual trees for forest inventories but are rarely applied to mangroves, even though previous studies have shown that individual tree characterization can increase the accuracy of forest parameter estimation. Assessing mangrove structure and biomass at the individual tree level may therefore greatly improve parameter estimation, but it has largely been limited by the relatively low spatial resolution of available data. With the increasing spatial resolution of RS data, and especially the use of UAVs, individual mangrove characterization is worth investigating; the first UAV LiDAR-based individual mangrove delineation study has recently been published, and more studies are encouraged to test the robustness of the algorithms and improve their accuracy. The second is retrieving more structural parameters. LiDAR technology has been advancing quickly, with increasing point density and decreasing cost, so mangrove structure can be represented in more detail. If parameters beyond tree height are retrieved from LiDAR data, the 3D structure of mangroves can be described more comprehensively, which may in turn improve the estimation of biomass and carbon stock. The third concerns ground truth. The estimation of biomass and carbon stock relies heavily on allometric equations. Because accurate biomass measurement requires destructive field surveys, which are not encouraged for already rapidly disappearing mangroves, the equations are usually borrowed from other studies, even though allometric equations have been shown to vary with species and location. We therefore call on the mangrove research community to enlarge the pool of publicly available, standardized ground truth datasets.
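To make the role of allometric equations concrete, the sketch below shows the typical tree-level calculation. The power-law form is standard, but the coefficients, tree measurements and plot size are hypothetical placeholders rather than values from any cited study.

```python
# Minimal sketch of the allometric step discussed above: converting tree-level
# structural measurements (stem diameter, wood density) into above-ground
# biomass (AGB) with a power-law equation of the general form
#   AGB = a * rho * D**b      (AGB in kg, D in cm, rho in g/cm^3)
# The coefficients a and b below are illustrative placeholders only.
from dataclasses import dataclass

A_COEF = 0.25   # assumed scaling coefficient
B_COEF = 2.46   # assumed diameter exponent

@dataclass
class Tree:
    dbh_cm: float        # diameter at breast height, from field survey or LiDAR
    wood_density: float  # species-specific wood density (g/cm^3)

def tree_agb_kg(tree: Tree) -> float:
    """Above-ground biomass of a single tree from the power-law allometry."""
    return A_COEF * tree.wood_density * tree.dbh_cm ** B_COEF

# Example plot: sum tree-level AGB and scale to a per-hectare estimate.
plot_trees = [Tree(12.0, 0.70), Tree(18.5, 0.65), Tree(25.0, 0.72)]
plot_area_ha = 0.04  # a hypothetical 20 m x 20 m plot
agb_t_ha = sum(tree_agb_kg(t) for t in plot_trees) / 1000.0 / plot_area_ha
print(f"Plot AGB: {agb_t_ha:.1f} t/ha")
```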
Compared with other mangrove topics, mangrove health analysis has been conducted relatively rarely with RS. We consider that mangrove health research may be further developed in two respects. The first is laser-induced fluorescence (LIF) LiDAR. Previous studies are mostly based on multispectral or hyperspectral imagery; LIF-LiDAR, whose effectiveness for vegetation monitoring was confirmed two decades ago, can provide another efficient tool for mangrove health analysis through the estimation of leaf chlorophyll concentration. With its ability to estimate chlorophyll concentration accurately, LIF-LiDAR can also be used in field surveys to provide more detailed validation data for mangrove health monitoring. The second is red-edge reflectance from satellite images. Many vegetation indices used for health analysis require reflectance around 700 nm, which is usually available only from hyperspectral datasets, and such data are often lacking and difficult to collect over large areas. The recently launched Sentinel-2 satellites carry multispectral sensors that record reflectance in three red-edge bands, providing essential information for mangrove health analysis; with their 20 m spatial resolution and 5-day revisit frequency, Sentinel-2 images will facilitate timely, large-scale monitoring of mangrove health. Carbon and ecohydrological fluxes are important for understanding ecosystem processes in mangrove forests. RS has proved to be an effective tool for estimating carbon flux and ecohydrology, but compared with other ecosystems, RS-based carbon flux and ecohydrology studies in mangrove forests have rarely been conducted. There are two major obstacles: difficulties in acquiring ecosystem flux data and difficulties in field surveys. In our opinion, recent progress in RS and in-situ instrumentation offers two great opportunities. First, satellites can drive large-scale carbon flux estimation. Global carbon emissions are now monitored from space by three pioneering satellites: NASA's Orbiting Carbon Observatory-2 (OCO-2), launched in 2014 to measure CO2; Japan's Greenhouse Gases Observing Satellite (GOSAT), launched in 2009 to observe CO2 and methane; and China's TanSat, launched in 2016 to examine carbon sources with high precision. Scientists are still working out how best to track greenhouse gases from space, and a new series of satellites has been lined up to support a larger monitoring effort: Japan launched GOSAT-2 in 2018, and NASA is preparing OCO-3 for launch in April 2019. All these satellites could serve as main data sources for global mangrove carbon flux estimation. Second, in-situ flux towers can drive high-precision local estimation of carbon flux and ecohydrology. To our knowledge, apart from one mobile flux platform study, all RS-based mangrove carbon flux and ecohydrology studies have been conducted in the Florida Everglades, where a carbon flux tower is available. Recently, more and more mangrove carbon flux towers have been built worldwide, for example in the Sundarbans, Zhangjiangkou and Zhanjiang; these towers could serve as high-precision data sources for local carbon estimation. Significant advances in mangrove RS have been achieved thanks to the development of earth observation capacity. While recent advances have applied new RS data to existing mangrove research topics, there remain opportunities to explore new ones. One new topic that we suggest is mapping mangrove productivity. Mangrove forests have long been considered highly productive ecosystems, yet compared with other ecosystems, fewer studies have focused on mangrove productivity, and no RS-based research has yet mapped it. With intensive mangrove in-situ surveys and a growing number of flux towers, great opportunities now exist for mangrove productivity mapping. In this review article, we identified key milestones in mangrove RS by associating the emergence of major research topics with the appearance of new sensors in four historical phases, i.e.
before 1989, 1990–1999, 2000–2009, and 2010–2018. For each identified research topic, an in-depth theoretical understanding was achieved by analysing both the first published article and the most-cited article. Based on these analyses, the current state of knowledge as well as existing limitations were summarized. In addition, to gain insight into the driving forces behind the emergence of new research topics, we compared the chronological evolution of mangrove RS with that of terrestrial forest RS. Interestingly, we found that key research topics in mangrove RS repeat those of forest RS, albeit with varying time lags. This can be attributed to two facts: 1) mangrove forests often appear as more elongated patches than terrestrial forests; and 2) field work is more challenging in mangrove habitats. Along with the advancement of remote sensors, various topics that had been studied in terrestrial forest RS were later transferred to mangrove studies. Based on the projected growth of foreseeable earth observation capacity, insights into future research directions in mangrove RS are also presented.
Mangrove forests are highly productive ecosystems that typically dominate the intertidal zone of tropical and subtropical coastlines. The history of mangrove remote sensing (RS) can be traced back to 1956. Over the last six decades, hot topics in the field of mangrove RS have evolved from mangrove distribution mapping, through biophysical parameter inversion, to ecosystem process characterization. Although several review articles have summarized progress in this field, none has highlighted the key milestones of historical developments pertinent to major research topics or the key drivers that stimulated such milestones. In this review, we aim to identify key milestones in mangrove RS by associating the emergence of major research topics with the occurrence of new sensors in four historical phases, i.e. before 1989, 1990–1999, 2000–2009, and 2010–2018. For each identified research topic, an in-depth theoretical understanding was achieved by analysing both the first published article and the most-cited article. Based on these analyses, the current state of knowledge as well as existing limitations were summarized. In addition, in order to gain insight into the driving forces behind the emergence of new research topics, we compared the chronological evolution of mangrove RS with that of terrestrial forest RS. Interestingly, we found that key research topics in mangrove RS replicated those of forest RS, yet with varying time lags. This can be attributed to the following two facts: 1) mangrove forests often appear as more elongated patches than terrestrial forests; and 2) field work is more challenging in mangrove habitat. Along with the advancement of RS sensors, various topics that had been studied in terrestrial forests were later transferred to mangrove studies. Based on the projected growth of foreseeable earth observation capacity, insights into future research directions in mangrove RS were also presented.
Positive selection of AS3MT to arsenic water in Andean populations
High levels of arsenic in drinking water are found in countries all over the world. Arsenic originates mainly from minerals in the ground and enters the food chain through drinking water and food sources such as crop plants. Anthropogenic activities such as mining and pesticide use contribute to elevated arsenic levels. Long-term exposure to arsenic can result in cancer, skin lesions, and cardiovascular and pulmonary diseases. Arsenic exposure can have drastic consequences not only later in life but already at an early age: arsenic can cross the placental barrier and thus affect foetal development, and it alters the concentrations of immune response modulators measured in breast milk as well as in newborn cord blood. High arsenic intake through drinking water in early childhood subsequently increases the risk of respiratory infections and diarrhoea in infants, as well as liver cancer-associated mortality. This suggests that populations exposed to high levels of arsenic over long periods of time may possess some kind of protection against arsenic toxicity. In the body, inorganic arsenic is converted to monomethylarsonic acid (MMA) and subsequently to dimethylarsinic acid (DMA) by methyltransferases. The second reaction occurs much faster, owing to the increased substrate affinity of the enzyme for MMA, and DMA is therefore the predominant end product of arsenic metabolism. Inorganic arsenic, MMA and DMA are excreted in the urine and can be used to measure arsenic metabolism. The most toxic arsenic product is MMA; the first step of arsenic metabolism is thus considered an activation rather than a detoxification of arsenic. Hence, low levels of MMA relative to DMA in urine are beneficial, as they reduce toxicity. In the highlands of Northwest Argentina, the Puna, high levels of arsenic in water have been present for many thousands of years. In some locations, levels exceed the WHO maximum safe level of 10 μg/l by a factor of 20. San Antonio de los Cobres, in the heart of the Puna region, is one such locality. Yet its inhabitants show unusually low levels of the excreted MMA metabolite relative to DMA and inorganic arsenic. In agreement with this observation, Puna highlanders show increased frequencies of arsenic methyltransferase (AS3MT) alleles that have been associated with low MMA urine concentrations. Allele differences in Collas were associated with enzyme expression levels and the resulting concentrations of arsenic metabolites. Lower levels of MMA were found in Collas compared with Bangladeshi, Chinese or Tibetan populations exposed to permanently elevated arsenic levels in drinking water. Genes responsible for the metabolism of arsenic may therefore have been targets of strong positive selection in these populations. Levels of MMA and DMA have recently been associated with various SNPs near AS3MT in women from the Colla population of San Antonio de los Cobres in the Argentinean Puna region. Moreover, an allele frequency based selection test applied to genome-wide genotype data in the same study suggested AS3MT as one of the main candidates of selection in this population. In this study, we investigate the strength of the selection pressure exerted by elevated arsenic levels on the genome of a different subset of men and women from the Colla population of San Antonio de los Cobres and surrounding villages. We use two neighboring groups, the Calchaquí and the Wichí, as control populations. We also assessed genome-wide genotype data using distinct allele frequency based selection tests and were
able to confirm strong signatures within and near the AS3MT gene, thus underlining the key role of this gene in the adaptation to environmental arsenic. Individuals with indigenous ancestry from three regions of the northwestern Argentinean province of Salta were recruited for this study in April 2011: Collas from the Andean Plateau or Puna, Calchaquíes from Cachi in the Calchaquí valleys at 2300 m, and Wichí from the plains of the Gran Chaco region near Embarcación. We used our previously published data for 730,525 single nucleotide polymorphisms (SNPs) genotyped in 25 Collas, 24 Calchaquíes and 24 Wichí. In the Colla sample, 16 individuals were from San Antonio de los Cobres, where arsenic levels reach 214 μg/l; 7 were from Tolar Grande, with arsenic levels of 4 μg/l; and one individual was from Olacapato, where arsenic levels are 12 μg/l. Arsenic concentrations for the exact sampling locations in the Gran Chaco region were not available; however, concentrations measured in surrounding locations were: Las Varas 0 μg/l, Pinchanal 19.5 μg/l, General Ballivián 4 μg/l and Tartagal 2.3 μg/l. The concentration in Cachi was 3.1 μg/l. Only healthy, unrelated adults who gave written informed consent were included in the study. The study was approved by the University of East Anglia Research Ethics Committee, the Ministry of Health of the Province of Salta and the University of Cambridge's Human Biology Research Ethics Committee. In total, 726,090 SNPs passed a genotype call rate of >98% and were included in downstream analyses. Two tests for positive selection were employed to analyze genome-wide signatures of arsenic adaptation. The pairwise fixation index (FST) was used as a measure of population differentiation between Collas and Wichí, and between Calchaquíes and Wichí, using the programme GENEPOP. We defined genomic windows of 200 kb and ranked them by their maximal FST values; only the top 1% was considered for analysis. Because the direction of pairwise FST signatures cannot be determined, we also used the population branch statistic (PBS) to pinpoint allele differentiation to the population of interest. PBS is based on the pairwise FST values of three populations. Collas and Calchaquíes were each compared to Wichí and Eskimos. Eskimos were chosen as the closest non-American outgroup genotyped on the same platform as Collas, Calchaquíes and Wichí; they originated from Novoe Chaplino, Chukotka Autonomous Okrug, in Northeast Siberia. PBS was calculated following Yi et al., using a modified approach from Pickrell et al., for 100 kb windows ranked by maximum PBS values.
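The PBS calculation described above can be sketched as follows; the transformation of FST into branch lengths follows the standard formulation of Yi et al., while the FST values used in the example are hypothetical illustrations rather than results of this study.

```python
# Minimal sketch of the population branch statistic (PBS). Pairwise FST values
# are transformed into branch lengths, T = -log(1 - FST), and the branch
# leading to the focal population A (here, e.g., Collas) is
#   PBS_A = (T_AB + T_AC - T_BC) / 2,
# where B and C are the reference populations (e.g., Wichi and Eskimos).
# The FST values below are hypothetical, not study results.
import math

def branch_length(fst: float) -> float:
    return -math.log(1.0 - fst)

def pbs(fst_ab: float, fst_ac: float, fst_bc: float) -> float:
    """PBS of population A given pairwise FST with references B and C."""
    return (branch_length(fst_ab) + branch_length(fst_ac)
            - branch_length(fst_bc)) / 2.0

# Example: a strongly differentiated window vs. a typical genome-wide window.
print(pbs(fst_ab=0.45, fst_ac=0.50, fst_bc=0.15))  # elevated PBS (~0.56)
print(pbs(fst_ab=0.05, fst_ac=0.08, fst_bc=0.06))  # background PBS (~0.04)
```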
Regional analysis of linkage disequilibrium was carried out with HaploView 4.2. As a first step in the functional interpretation of the selection scan results, we compiled a list of genes known to be involved in arsenic metabolism. Genes were included from three sources: from the Gene Ontology database AmiGO we extracted genes matching the search keyword 'arsen', so as to include arsenic metabolites such as arsenate and arsenite; from the gene information database GeneCards we extracted genes associated with any compound containing the keyword 'arsen'; and additional methyltransferases were taken from the literature. The final candidate list consisted of 35 unique genes, and the selection test results were subsequently screened for these 35 candidate genes of arsenic metabolism. Allele frequency differences between the three populations were assessed with one-way analysis of variance implemented in the Statistical Package for the Social Sciences (SPSS), version 20. We conducted whole genome scans in Collas and Calchaquíes to identify genetic loci showing higher than genome-wide average allelic differences between populations. These scans highlighted the arsenic methyltransferase gene AS3MT as highly differentiated in the Colla population. The gene was among the top 15 windows in the PBS scan of Colla highlanders and among the top 40 windows of pairwise FST between Collas and Wichí. The pairwise FST signal was driven exclusively by two SNPs, one within the AS3MT gene and another 1 kb upstream of AS3MT. Specific variants at these loci have been associated with beneficial arsenic metabolism. The C allele of the T/C SNP rs1046778 was more frequent in Collas than in Wichí, and the G allele of the G/A SNP rs7085104 was likewise prevalent in Collas. This is consistent with previously reported frequencies for these alleles, which have been associated with overall decreased expression of AS3MT and lower excreted MMA levels. Engström et al.
showed a 175% increase of AS3MT expression in homozygous carriers of the T allele at the rs1046778 locus compared to homozygotes of the C allele.Overall, 92% of Collas were at least heterozygous for the C and G allele on the same chromosomal strand corresponding to both functionally advantageous alleles.The percentage of homozygotes for both beneficial alleles is decreased in individuals from the Calchaquí valley, but not significantly.However, allele frequencies in both Collas and Calchaquíes differed significantly from Wichí.A recent study using a dataset with greater SNP density, however, could not identify these two previously highlighted SNPs among the top 20 SNPs associated with MMA or DMA concentrations in 124 women from San Antonio de los Cobres .In agreement with our FST results, PBS comparisons of Collas, Wichí, and Eskimos highlighted a window containing AS3MT and two neighbouring genes, CNNM2 and WBP1L.However, this test identified a different set of SNPs than FST in the surrounding region of AS3MT.The SNP nearest to the gene region identified by PBS was located 15 kb downstream of AS3MT within CNNM2 and ranked 11th.Other high-ranking SNPs included rs17115100, within CYP17A1, 38 kb upstream of AS3MT, and rs11191514 within CNNM2, 112 kb downstream.The recent study by Schlebusch and colleagues associated rs17115100 and rs11191514 with percentage of MMA in urine and rs17115100 also with percentage of DMA in urine .The allele frequency FST based selection test used by these authors also highlighted AS3MT as top candidate of selection in the Colla population with Peruvians and Colombians as control populations.We previously reported haplotype based selection tests in Collas but AS3MT was not among the top 1% haplotypes.However, a regional haplotype analysis 1 Mb up and downstream of AS3MT identified a haplotype block of 499 kb containing AS3MT.We repeated the PBS test using a neighbouring population to the Collas, the Calchaquíes, comparing it to Wichí and Eskimos.This test identified the same upstream SNP, albeit the SNP containing window ranked much lower.While the top 1% FST results from Calchaquíes lacked AS3MT, it contained another gene from the candidate gene list, the cyclin-dependent kinase inhibitor 1A gene.This kinase inhibitor is a modulator of the cell cycle and was inferred by orthologs to respond to an arsenic-containing substance.AS3MT was the only of the 35 arsenic candidate genes showing a signature of selection with two selection tests in the same population.High concentrations of arsenic in drinking water represent a strong environmental stressor, driving significant adaptive change in the highland populations of the Argentinean Puna.In this study, AS3MT was identified by our genome-wide scans as the main outcome of positive selection.Alleles within or nearby this gene are highly differentiated and appear within the top 1% of ca. 
13,000 windows across the genome. AS3MT had not previously been identified among the top 1% of two haplotype based tests in Collas. However, the minimum SNP density required for iHS in a 200 kb window was not reached in the window containing the gene, so no iHS test statistic could be calculated, and XP-EHH did not highlight the respective window as a particularly long high-frequency haplotype either. Thus, the selection signature of AS3MT was not detected by our previous haplotype based tests but only by allele frequency based tests. Although a similar study also failed to identify a strong selection signal with iHS, it reported the average iHS values in a 1 Mb window around AS3MT to be among the top 3%. In both studies, allele frequency based tests led to more conclusive results, suggesting selection from standing variation present in the ancestral population prior to the exposure to high arsenic concentrations. The alleles identified in our present study have been functionally evaluated and associated with reduced MMA concentrations in the Colla population of San Antonio de los Cobres. High concentrations of MMA are associated with arsenic-related diseases; thus, the metabolism of Argentine Puna inhabitants appears fine-tuned to reduce toxic MMA. AS3MT was the only gene from the arsenic candidate gene list highlighted by two selection tests using a genome-wide genotype approach. An alternative arsenic methyltransferase, N6AMT1, which was also associated with lower MMA in Collas, did not reach genome-wide significance. The findings of our study are therefore in good agreement with a recent report suggesting selection pressure from arsenic-rich water in the Colla population, albeit one analyzing different individuals and using distinct control populations and different FST based selection tests. Alleles both within and around AS3MT appear to be targets of strong positive selection. The SNPs around AS3MT could be in linkage with a regulatory or functional variant, or could themselves influence AS3MT expression. An analysis of the region revealed a haplotype block of approximately 499 kb around the gene, suggesting selection of surrounding SNPs. Schlebusch et al.
also highlighted selection signatures outside the coding region of the AS3MT gene. Whole genome scans have the potential to reveal more distantly located loci with functional relevance, which may be overlooked by targeted resequencing of specific gene regions. Besides reporting strong signatures around AS3MT, we also highlighted adjacent genes, such as CNNM2 and CYP17A1, and cannot unequivocally exclude that these also contribute, in particular to the PBS selection signal. However, considering the high FST scores within AS3MT, the functional relevance of this gene in arsenic metabolism and the association of alleles overrepresented in Collas with its expression, AS3MT is a likely candidate of selection. Nevertheless, functional in vitro and in vivo studies of these alleles are necessary for a more conclusive interpretation. In this regard, it is worth noting that the neighboring genes CNNM2 and WBP1L have been shown to be differentially methylated in the Colla population. Since methylation reduces gene expression, a decreased level of the arsenic methyltransferase was observed in peripheral blood. The reduced expression of this enzyme is associated with lower levels of MMA and is thus most likely beneficial in an environment with elevated arsenic concentrations. It is interesting to note that the FST values for the two highlighted alleles within and 1 kb upstream of the AS3MT gene were 10-fold higher than the gene's average FST of 0.053 calculated in another study, which compared Collas to indigenous Peruvians. This underlines the extreme allele differentiation of the two functionally associated SNPs compared with the gene region as a whole. Significant differences in AS3MT allele frequencies were also observed between Calchaquíes, Wichí and Eskimos, even though arsenic levels in the ground water of the Calchaquí region are lower than those in the Puna. The selection signature of AS3MT ranks lower in Calchaquíes than in Collas, albeit still among the top 1%, implying either a reduced selection pressure in the Calchaquí population or gene flow from Collas. Calchaquíes also show a selection signature around CDKN1A, as indicated by pairwise FST, although this signature is weaker than that of AS3MT in Collas; the functional significance of this cell cycle regulator for arsenic metabolism remains to be clarified. In summary, our study confirms previous claims that positive selection has shaped allele frequencies of AS3MT to allow adaptation to the extremely toxic element arsenic. We show signatures of positive selection driving allele frequencies in Collas and, to a smaller degree, in the neighboring Calchaquí population. The selected alleles have enabled these populations to thrive for thousands of years despite constant exposure to high levels of arsenic in drinking water. The toxicant arsenic was shown to shape allele frequencies of the main arsenic methyltransferase in Argentinean Collas and Calchaquíes. This study confirms recent findings highlighting the strong selection pressure of the environmental carcinogen arsenic at the genome-wide level, and suggests that natural selection has given carriers of beneficial alleles higher reproductive success, allowing them to thrive despite the daily consumption of high levels of arsenic. The authors declare that there are no conflicts of interest. This work was supported by a European Research Council Starting Investigator grant, a starting investigator grant from the University of East Anglia, a Young Explorers Grant from the National Geographic Society and a Sir Henry Wellcome Postdoctoral Fellowship.
Publication and open access costs were covered by the University of Winchester. The funding bodies had no influence on the study design or analysis, data interpretation or article preparation.
Arsenic is a carcinogen associated with skin lesions and cardiovascular diseases. The Colla population from the Puna region in Northwest Argentina is exposed to levels of arsenic in drinking water exceeding the recommended maximum by a factor of 20. Yet they have thrived in this challenging environment for thousands of years, and we therefore hypothesized strong selection signatures in genes involved in arsenic metabolism. We analyzed genome-wide genotype data for 730,000 loci in 25 Collas, considering 24 individuals of the neighbouring Calchaquíes and 24 Wichí from the Gran Chaco region in the Argentine province of Salta as control groups. We identified a strong signal of positive selection in the main arsenic methyltransferase gene AS3MT, which has previously been associated with lower concentrations of the most toxic product of arsenic metabolism, monomethylarsonic acid (MMA). This study confirms recent reports of selection signals in the AS3MT gene, albeit using different samples, tests and control populations.
Rebound effects in agricultural land and soil management: Review and analytical framework
Improvements in resource-use efficiency are central to decoupling economic growth from natural resource consumption.For agriculture, this decoupling is essential, given that global drivers such as dietary shifts toward a higher share of animal based proteins and population growth indicate an increasing demand for agricultural goods over the next three decades.Expansion of global agricultural area or intensification of production through much higher use of inputs is unsuited to balance this increase, since the agricultural sector already significantly contributes to the exceeding of planetary boundaries for biodiversity loss and biogeochemical flows of nitrogen and phosphorus."However, increasing the resource-use efficiency in agriculture could create a win-win situation by enhancing economic performance and alleviating pressures on the environment.Efficiency targets have therefore been formulated across global and national policy levels.At the global level, target 2 of Sustainable Development Goal 12 “Sustainable Consumption and Production” seeks to achieve sustainable management and efficient use of natural resources by 2030.At the European level, the Roadmap to a Resource Efficient Europe sets a 20% reduction of resource inputs within the food chain as a milestone for 2020, while in the EU Rural Development Act, increasing the efficiency of agricultural production is a priority, and water use efficiency, energy efficiency and greenhouse gas emissions are explicitly referred to.One example at the national level is the German Policy Strategy on Bioeconomy, which seeks to achieve sustainable intensification of agricultural production by increasing productivity while protecting natural resources and minimising greenhouse gas emissions.While efficiency increases are often considered a silver bullet, associated resource savings usually turn out to be smaller than what the improvements in technical efficiency under ceteris paribus assumptions would suggest.Increasing the efficiency of a production process affects the producer-consumer system and can trigger adaptive behaviour that offsets part or all of the initial resource savings.In extreme cases, these so-called rebound effects can even result in a net increase in resource consumption.“Rebound effects” is a collective term that encompasses several economic as well as social-psychological adaptation mechanisms which occur in the wake of increases in resource-use efficiency and which affect the total consumption of that resource.It is important to note that only behavioural changes caused by efficiency improvement are considered rebound effects, whereas other changes, such as changes due to general economic growth, may also increase resource use but are not rebound effects.The concept of rebound effects was originally developed in the context of energy efficiency but has since been expanded to resource-use efficiency in general.The first description of rebound effects dates back to the mid-1800s and the British economist W.S. 
Jevons, who postulated that higher fuel efficiencies would always result in higher, instead of lower resource exploitation.This theory was reiterated in the 1980s by economists Khazzoom and Brookes and termed the Khazzoom–Brookes postulate.While current research indicates that in most cases, rebound effects are not large enough to result in a net increase in resource use, even partial offsetting of savings has implications for resource-use planning and the assessment of potential benefits from innovations, such as novel technologies and practices in agricultural management.Ex-ante impact assessment is a means to analyse positive and negative as well as intended and unintended impacts of decision options, such as implementing more efficient technologies and practices, against targeted benchmarks.An impressive wealth of tools and methods for ex-ante impact assessment has been developed over the last decade, particularly in the field of agriculture.However, despite the increasing sophistication of such tools for agricultural practice and policy, consideration of rebound effects is still rare due to a lack of information on causal relationships determining their occurrence and size.By describing and assessing these relationships, this paper facilitates the consideration of rebound effects in future assessments, which will in turn contribute to developing sustainability policies that promote technology development while mitigating its adverse effects.This paper focuses on rebound effects in agricultural land and soil management.This term denotes arable and grassland management but excludes livestock management.A distinction is made between land and soil, to account for two different concepts in the debate on resource-use efficiency.Land represents the terrestrial solid part of the earth that is not permanently under water.More efficient land use could reduce the expansion of agricultural areas into natural habitats or even spare land for conservation and biodiversity purposes.Understanding rebound effects in land management is therefore central to advancing the debate on land sparing vs. 
land sharing, where the latter represents a concept of multifunctional land use that is less intense but supports nature conservation purposes alongside agricultural production.Soil, on the other hand, is the uppermost zone of the land surface, in which mineral particles, organic matter, water, air and living organisms interact.In soils, biogeochemical turnover processes enable biomass growth, and it is the optimisation of these natural processes that is targeted by efficiency-improving innovations under the paradigms of sustainable intensification and ecological intensification.Rebound effects influence how soil management alternatives or novel crop varieties may reduce energy consumption and result in reduced fertilizer and pesticide application or lower greenhouse gas emissions.With regard to both land and soil management, rebound effects are highly relevant for evaluating the role that yield improvements may play in satisfying a growing global demand for agricultural commodities.Although the number of publications analysing rebound effects in agricultural land and soil management is growing, most of them are limited to single resources and specific rebound mechanisms.To our knowledge, there are no studies yet providing an overview of rebound effects across the main resources used in agriculture or providing guidelines for assessing potential rebound effects from agricultural innovations.This paper contributes to closing this gap by:Reviewing the state of knowledge on rebound effects connected to efficiency improvements in the use of the main resources of agricultural land and soil management.Developing a framework that facilitates the assessment of rebound effects from agricultural management innovations."Testing the framework's application by assessing emerging innovations in agricultural land and soil management for potential economic rebound effects.In the following section, we describe how the literature review was conducted, how the framework for assessing rebound effects was created and provide information on the test case.In section 3, we present the framework and discuss the findings from the literature review and from the assessment of our test case.In section 4, we draw final conclusions.Distinguishing between rebound effects and changes in resource use caused by other factors, such as increases in GDP or changes in societal preferences is challenging.Especially in the analysis of long time series or complex policy measures, quantifying rebound effects requires an assessment of the various rebound mechanisms discussed in section 3.1 of this article, and assumptions on how resource use would have developed in the absence of efficiency improvements.Because the concept of rebound effects is resource specific, efficiency increases in the use of one resource that result in increased consumption of another resource are not considered rebound effects.While these spill-over effects are highly relevant for efficiency improvements in agriculture, their analysis is beyond the scope of this paper.The main physical resources used in agricultural land and soil management are land and soil, water, nutrients, pesticides, and energy.Additionally, international goals for efficiency improvements in agriculture include aspects of greenhouse gas emission.These emissions can be treated like another resource category, especially since the goals of the Paris Agreement constrain the total amount of greenhouse gases that should be emitted into the atmosphere.To assess the state of knowledge on rebound 
effects connected to these resources, publications were reviewed using a keyword-based search in the Web of Science and Scopus.The search terms rebound effect* or Jevon* were combined with each of the terms: agricultur*, farm*, land, soil, irrigation, phosphorus, nitrogen, fertilizer and pesticide within the title, abstract or keywords.Excluding non-English publications, we identified 33 articles relevant to the objectives of this study.An overview of the results and the resources addressed by the individual papers is provided in section 3.To facilitate a structured analysis, we created a framework of economic and social-psychological rebound effects.This framework builds on the commonly used classification of rebound effect types into direct, indirect and economy wide effects, the differentiation between economic and social-psychological causes and the idea to combine rebound effect types and causes into a matrix like structure.In our framework, we further distinguish between producer and consumer related direct and indirect rebound effects, and we list factors that influence effect sizes of the different rebound mechanisms."While the framework was used to structure the findings of the literature review, information from the review was, on the other hand, used to verify and refine the framework's list of factors influencing effect sizes.Because the reviewed literature focusses mainly on economic rebound effects and provides little information on social-psychological mechanisms, which are consumer based and therefore predominately food related, we additionally searched for publications that used the term food in combination with either rebound effect* or Jevon* in title, abstract or keywords.In total, we considered 7 publications that highlight the consumer perspective and socio-economic causes of rebound effects.The framework presented in this article is a tool to aid the assessment of potential rebound effects.While it was created within the context of innovations in agricultural land and soil management, it can easily be transferred to other fields, due to its generic nature.As a test case, we assess innovations in soil management for potential rebound effects.We draw on data from a foresight study focussing on future soil management in Germany, as an example for countries with a temperate climate, industrialized agriculture and low yield gaps.The study is based on a literature review and expert interviews, and it groups emerging technologies and practices into soil-related management categories.In the test case, we address those categories for which there is sufficient evidence to evaluate efficiency improvements.The assessment of the test case is restricted to economic rebound effects, since most consumers are unaware of the individual technologies and practices involved in soil management.Innovations in this area are therefore unlikely to affect consumers’ perceptions of end products, which is a prerequisite of social-psychological rebound effects.Despite the restriction to economic rebound effects, the strength of the test case study is that it covers the whole range of innovative cropping activities and allows the identification of multiple areas of potential rebound effects.Causes of rebound effects can be categorized as economic or social-psychological.From an economic point of view, more efficient resource use contributes to lower production costs and affects the price of goods and services.Under the neoclassical assumption of rational economic agents, this leads to increased 
consumption and production, thereby offsetting a portion of the initial resource savings.On a macroeconomic level, efficiency increases may promote overall economic growth associated with an increase in resource consumption, either through price effects or by enabling technological innovations.de Haan et al. also highlight the relevance of social-psychological rebound effects and note that purely economic, rational behaviour is not a valid assumption for private consumers.For this group in particular, social-psychological factors can either create rebound effects or lead to additional resource savings.Policies aimed at increasing resource-use efficiency can affect economic boundary conditions and influence social-psychological factors.Therefore, they have the potential to promote or limit rebound effects in multiple ways.Rebound effect types can be divided into direct effects, indirect effects and economy-wide effects.Rebound effect causes and types constitute the building blocks of our rebound effect framework.Starting from an improvement of efficiency that reduces the demand for a specific resource, the adaptations of producers and consumers can create additional demand for that resource.Economic and social-psychological causes are distinguished and differentiated based on rebound mechanisms.While rebound effects will occur in most cases, they may be too small to be of relevance for a specific assessment.What effect sizes are considered relevant must be determined by the user of the framework.Within the framework, the relevance of different rebound mechanisms for a specific case is assessed by answering a set of questions.Analogous to a flow chart, arrows with “yes” and “no” point to likely consequences.Where a rebound mechanism is considered relevant, the arrow marked “yes” points to external factors that, together with the degree to which consumers’ perception or production cost is affected, determine the size of the respective rebound effect.For example, an efficiency increase may lower production costs and motivate producers to expand.The degree to which such an expansion is likely is determined by the degree of the cost reduction, by the degree of market saturation, and by the degree to which producers have access to additional production factors.From an economic point of view, direct rebound effects comprise income effects and substitution effects.Under the income effect, higher efficiencies mean lower production costs, which may motivate producers to expand if additional factors of production are available.For example, if the introduction of more efficient irrigation technologies makes irrigated agriculture more profitable, farmers may opt to expand their area of irrigated farmland.To do so, however, they require access to land that is allowed to be irrigated, additional labour to manage this land and the financial means to pay for the investment.Where lower costs result in lower prices, consumers are likely to react with increased consumption of the more efficient product, depending on own-price demand elasticity.Additionally, producers and consumers may opt to use the more efficient production process to substitute for other types of production.The degree to which this occurs depends on the elasticity of substitution.From a social-psychological point of view, services produced by processes that consume fewer resources are perceived as more positive than those produced conventionally.This is especially true if those services are labelled as socially or environmentally 
friendly.Where consumers restrict their consumption due to awareness of resource-use implications, they may become less hesitant to consume services from more efficient processes, thereby creating additional demand.Indirect rebound effects occur if efficiency increases in a process result in an increasing demand for other processes that consume the same resource.Where higher efficiency translates into financial gains for producers and/or consumers, part or all of this gain is usually spent on additional consumption of goods and services, which may also cause additional consumption of the resource that is saved.It is clear that direct and indirect rebound effects are negatively correlated: the more financial resources are spent on the original, more efficient process, the less can be spent on other processes.Chitnis et al. noted that indirect rebound effects are not only caused by improvements in the efficiency of production but can also be caused by voluntary reductions in the consumption of specific goods, if the monetary savings are spent elsewhere.From a social-psychological point of view, many consumers implicitly evaluate their own behaviour and apply a budget to their resource consumption.Being more environmentally friendly in one respect may therefore lead to less self-restraint in other areas.Kaklamanou et al. investigated this in an online survey in which participants were required to agree or disagree with statements claiming that engaging in specific sustainable practices could compensate for unsustainable behaviour in another areas.They found only a low rate of agreement, ranging from 4%–16%, but caution that this figure is a conservative estimate, as concerns over the social desirability of answers may have led some respondents to reject statements.Economy-wide rebound effects occur if an entire economy is affected through changes in societal values, increases in wealth, production or consumption, or the introduction of technological innovations made possible by more efficient processes.For example, the introduction of chemical fertilizer dramatically increased the efficiency of agricultural production in terms of yield per hectare and reduced the need for agricultural land.Furthermore, it has affected economies by lowering food prices and freeing up consumers’ resources to be spent on other goods and services.The resulting economic growth and increased wealth have affected dietary preferences and led to an increase in the consumption of animal-based proteins.Because these diets require more land per nutritional value than plant-based diets, a rebound effect has occurred that has offset part of the original resource savings.From a social-psychological point of view, technological progress towards more efficient processes may give consumers a sense of optimism that problems are being taken care of by experts, thereby reducing their perceived responsibility for sustainable consumption.However, the opposite is also possible, and sustainable development may create a new paradigm of more sustainable behaviour.Negative rebound effects such as this are referred to by Wei as super-conservation.The keyword-based literature search identified 33 journal articles addressing rebound effects connected to resource use in agriculture.Table 1 lists the publications and the resources that they address.All but two studies focused on economic rebound effects.In total, 12 articles addressed rebound effects related to the resource land; 14 examined the use of irrigation water; three addressed 
energy use; and four addressed greenhouse gas emissions.One study each addressed fertilizer use and pesticide application.Ten of the articles were published in the year in which this study was conducted, while 28 were published within the last 5 years.The results show that rebound effects in agriculture are a new and rapidly growing research topic.This is not to say that rebound effects have not been reported in earlier studies, but rather, that researchers have only started very recently to address pertinent findings as rebound effects and used the term in the title, abstract or keywords.Even among the identified studies, the terminology is not used consistently.Some authors investigate the occurrence of Jevons’ paradox without referencing the term “rebound effect”, while others treat the two terms as synonyms.We present and discuss the findings of the literature review, ordered by resource category, together with results from our test case.The latter includes expected efficiency improvements from emerging technologies and practices, and potential economic rebound effects identified through the use of our framework.To demonstrate the application of the framework, we apply all steps in detail for the resource land, while for the other resources, we only note the most relevant effects to avoid redundancy.The question of how far improvements in productivity result in reduced total land use is the subject of a longstanding scientific debate.At one extreme of the argument, the so-called Borlaug hypothesis states that productivity gains are the key to limiting the expansion of agricultural land into natural ecosystems."At the other extreme, proponents of Jevons' paradox assert that productivity gains instead motivate and promote agricultural expansion.The two effects seem to exist in parallel, with local circumstances determining the prevalence of one or the other."Relevant factors promoting land sparing include the quality of environmental governance as well as formal recognition of indigenous peoples' and local communities' rights to forests. "However, in a test case in Argentina, Ceddia and Zepharovich found that the introduction of a forest protection law and land titling to indigenous peoples instead promoted the occurrence of Jevons' paradox from agricultural intensification.As possible reasons for this result, the authors named a lack of effectiveness of the law and deforestation undertaken to prevent land from being titled to indigenous peoples.Factors considered to contribute to the occurrence of Jevons’ paradox are a high quality of governance, as measured by World Bank indicators for corruption control, rule of law, and voice and accountability as well as low yield levels and potential availability of new farmland.Barbier and Burgess found a negative correlation between agricultural yields and the deforestation rate in tropical countries between 1980 and 1985.However, Ewers et al. showed in a global study on yields and land use between 1979 and 1999 that land sparing occurred only in some cases.For developed countries, they found no evidence that increases in agricultural productivity resulted in lower per capita cropland demand.In a meta-study, Villoria et al. showed that at the global level, most modelling results indicate a land-sparing effect, while Jevons’ paradox may occur regionally.Green at al. analysed the question of land sparing vs. 
land sharing with a focus on biodiversity preservation and presented a model for investigating the effects of agricultural production intensity.Assuming a fixed demand or production target, they found that if the relationship between yields and population densities is best described by a concave function, as implied by empirical data from developing countries, the optimal strategy for species conservation lies in intensive production in some places and no production in other places.However, these authors indicated that the assumption of a fixed demand was a limitation of their study and that increased productivity could lead to higher production targets.This point was taken up by Desquilbet et al., who investigated the same data assuming market equilibrium instead of exogenous production levels.They found that intensive farming results in increased demand and that due to rebound effects, extensive farming is more beneficial to biodiversity conservation, unless the degree of convexity between biodiversity and yield is high.Lambin and Meyfroidt discussed the implications of direct rebound effects from raising agricultural productivity in more detail.They pointed to the income effect, i.e., that more efficient production is likely to be more profitable and could therefore motivate expansion.This expansion would depend in the short term on the price elasticity of demand.Although demand for staple crops is considered mostly inelastic, lower prices could increase the demand for biofuels, meat and other luxury crops.Non-staple crops are considered to be price and income elastic, and direct economic rebound effects are therefore to be expected where expansion is possible through available production factors such as labour, capital and land.Bedoya-Perales et al. reported rebound effects related to the expansion of quinoa cultivation in Ecuador, where increased productivity was found to result in an increase, rather than a decrease of cultivated land.Latawiec et al. 
discussed the potential of intensifying pasture-based agriculture in Brazil to reduce deforestation and achieve land sparing. While intensification could meet targets for production increases with a lower land demand, they stated that higher productivity could also motivate agricultural expansion and that more profitable agriculture would make nature conservation more expensive, due to higher opportunity costs. To mitigate rebound effects, these authors see a need for economic incentives for farmers, either through policies such as taxes, subsidies or the enforcement of existing regulations, or via market-side initiatives such as labelling products according to their environmental footprint. Meyfroidt named land zoning and value chain interventions such as certification schemes as options for mitigating rebound effects that would otherwise increase the use of land for agriculture through farmland expansion. Regarding our test case of emerging technologies and practices of soil management in Germany, we expect future increases in land-use efficiency from innovations in the following categories. Improved crop varieties: crop breeding is an on-going process, and further improvements in productivity and stress tolerance are expected. Intercropping: this may be practiced on a relevant share of cropland once better-suited technologies become available; meta-studies show that intercropping can increase yields in terms of land equivalent ratios, i.e., achieve higher yields than if all crops had been grown as single crops on an area equivalent to their share in the intercrop mix (a worked example of this ratio is sketched below), and it also improves productivity by increasing yield stability. Irrigation: an increase in the irrigated agricultural area is expected, partly motivated by concerns about climate change, which will increase yield per hectare. Agroforestry: this may be practiced more widely in the future, mainly due to an increasing demand for lignocellulosic material for industry; similar to intercropping, the mixture of species in well-designed agroforestry systems can increase land equivalent ratios under temperate conditions.
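The land equivalent ratio referred to above can be illustrated with a short calculation; all yield figures in the sketch are hypothetical and serve only to show how the ratio is formed.

```python
# Minimal sketch (hypothetical yields): the land equivalent ratio (LER) used
# above to express intercropping gains. LER sums, over the component crops,
# the intercrop yield relative to the sole-crop yield:
#   LER = sum_i (Y_intercrop_i / Y_monocrop_i)
# LER > 1 means the mixture needs less land than separate monocultures to
# produce the same output.
def land_equivalent_ratio(intercrop_yields: dict, monocrop_yields: dict) -> float:
    return sum(intercrop_yields[c] / monocrop_yields[c] for c in intercrop_yields)

# Example: a wheat-faba bean mixture (yields in t/ha, illustrative only).
mono = {"wheat": 7.0, "faba_bean": 3.5}
inter = {"wheat": 4.9, "faba_bean": 1.75}

ler = land_equivalent_ratio(inter, mono)
print(f"LER = {ler:.2f}")  # 0.70 + 0.50 = 1.20 -> ~17% less land for the same output
```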
To assess potential economic rebound effects, we apply the right-hand side of the framework presented in Fig. 1, focussing on effects at the national scale. The first-level question, "Does the efficiency improvement lower production costs by a relevant degree?", can be affirmed for all listed practices except agroforestry. In that case, high investment costs with long payback periods are likely to offset cost reductions, and no relevant economic rebound effects are therefore expected. For the remaining practices, the following appraisal has been made. Direct producer effect: "Is production likely to be expanded?" Due to a growing global demand for agricultural products, we assume that yield increases will be fully offset by higher production and will not result in an abandonment of agricultural area in Germany; this constitutes a rebound effect of 100% at the national level. If profitability is strongly increased by an innovation, even Jevons' paradox is possible if areas under pasture are converted into additional cropland. Such an effect has been observed in Germany in connection with the intensification of milk production, which motivated farmers to expand the area used for fodder maize. However, regulations at the German and European levels currently restrict options for such conversion, thereby limiting expansion by regulating the availability of land as an additional production factor. Direct consumer effect: "Are product prices for consumers significantly reduced?" In Germany, the price development of agricultural commodities depends on multiple factors, of which land productivity is only one. We therefore do not expect the identified innovations to result in a drop in prices for German consumers, and consequently do not expect them to increase consumer demand. Direct effect: "Can the more efficient process be used to substitute other processes?" Substitution options for agricultural commodities are limited and commodity specific. For food crops, the options are much more limited than for animal feed and non-food crops. In the latter category, biomass is expected increasingly to substitute fossil resources in products such as plastics. Depending on the development of markets for agricultural resources for material use, increased productivity could become a driver of this substitution, which would constitute a rebound effect. More generally, this novel demand for agricultural products illustrates how an increase in available land due to improved land-use efficiency could rapidly be utilized for alternative types of production. Indirect effect: "Are cost savings likely to be spent elsewhere or re-invested?" In the near future, input prices are expected to rise more strongly than product prices, which is likely to offset part or all of the achieved cost savings; we therefore expect only very moderate indirect rebound effects. Economy-wide effect: "Will economic growth at the national scale be affected?" Within the test case, we consider relatively small changes within an economic sector that generates only 0.7% of German GDP; we therefore assume that economic growth at the national scale will not be affected to a relevant degree. In conclusion, several potential innovations in agricultural soil management were identified that would increase land-use efficiency and would likely cause economic rebound effects: improved crop varieties, intercropping and an increase in the irrigated agricultural area. Using the framework, rebound effects at the national scale were found to be most likely to occur in the form of direct producer effects through expansion and, potentially, substitution.
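The appraisal above can be read as a simple yes/no checklist. The sketch below encodes that reading schematically; the question order follows the economic branch of the framework, but the data structure and the example answers for the "improved crop varieties" case are our own simplification, not part of the published framework.

```python
# Schematic sketch only: the economic branch of the rebound-effect framework
# applied above, encoded as a yes/no checklist. Structure and example answers
# are an illustrative simplification.
from dataclasses import dataclass

@dataclass
class EconomicReboundScreen:
    lowers_production_cost: bool        # first-level question
    producer_expansion_possible: bool   # direct producer effect
    consumer_prices_drop: bool          # direct consumer effect
    substitution_possible: bool         # direct substitution effect
    savings_respent: bool               # indirect effect
    affects_national_growth: bool       # economy-wide effect

    def relevant_mechanisms(self) -> list:
        if not self.lowers_production_cost:
            return []  # no relevant economic rebound expected
        labels = [
            ("direct producer effect (expansion)", self.producer_expansion_possible),
            ("direct consumer effect (price/demand)", self.consumer_prices_drop),
            ("direct substitution effect", self.substitution_possible),
            ("indirect effect (re-spending)", self.savings_respent),
            ("economy-wide effect", self.affects_national_growth),
        ]
        return [name for name, flagged in labels if flagged]

# Illustrative screening of the 'improved crop varieties' case discussed above.
crop_varieties = EconomicReboundScreen(True, True, False, True, False, False)
print(crop_varieties.relevant_mechanisms())
```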
Although effect sizes close to 100% are expected for the resource land at the German national scale, occurrences of Jevons’ paradox are expected to be rare due to the institutional setting in Germany.For the resource irrigation water, particularly where surface water is used, it is necessary to distinguish between water use and water consumption.Only a portion of the water used in irrigation systems is taken up and consumed by plants, while the remainder is considered to be wasted.However, water wasted by inefficient irrigation systems may become re-available by seeping into aquifers or re-entering rivers in the form of return-flow downstream from the point of abstraction.More efficient irrigation systems reduce the share of water that is wasted but may also increase the amount of water that is available to plants, resulting in higher consumption.This can result in a hydrological paradox, where water use declines due to higher efficiencies, but water consumption increases, exacerbating water shortages.Additionally, rebound effects appear to be common, with several studies reporting cases in which rebound effects were greater than 100%, and efficiency improvements resulted in higher, instead of lower, water consumption.More efficient irrigation systems are likely to induce direct rebound effects because increased water productivity constitutes an economic incentive for farmers to expand their area of irrigated land and to replace non-irrigated crops with irrigated crops, which yield higher revenues.This substitution effect may even be exacerbated by the need to cover investment costs for the improved irrigation systems.While numerous studies have investigated whether efficiency improvements result in a reduction or increase in total water use, very few studies have attempted to quantify effect sizes.One exception is an article by Song et al., who investigated the agricultural water rebound effect in China.Using macro-scale economic indicators and a statistical model to separate increases in water use due to technical progress from those induced by increases in other inputs, these authors calculated an average direct rebound effect of 61.5% at the national level in China between 1998 and 2014.They also identified rebound effect sizes greater than 100% at the national level for individual years.At the regional level, they found an average rebound effect greater than 100% in some provinces across the entire time frame.Without accompanying policy measures, efficiency improvements in agricultural irrigation are likely to lead to large rebound effects and possibly Jevons’ paradox.Accordingly, the FAO review reported by Perry et al. concluded that “introducing hi-tech irrigation in the absence of controls on water allocations will usually make the situation worse: consumption per unit area increases, the area irrigated increases, and farmers will tend to pump more water from ever-deeper sources.” Effect sizes, however, depend on preconditions that can be influenced by policy making.Expansion generally requires the availability of additional production factors, and where additional land to be irrigated is unavailable due to legal restrictions, this type of rebound effect will not occur.Accordingly, using a microeconomic model where neither an increase in irrigated area nor a shift to crops with a higher water demand is possible, Berbel et al. found that efficiency improvements result in lower water use and negligible increases in water consumption when irrigation prior to the improvement already achieved the full yield potential.
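The distinction between water use and water consumption, and the resulting hydrological paradox, can be illustrated with a minimal numerical sketch; the volumes and efficiency fractions below are purely illustrative and are not taken from Berbel et al. or any other study cited here:

def water_balance(withdrawal, consumed_fraction):
    """Split an irrigation withdrawal into crop consumption and return flow."""
    consumption = withdrawal * consumed_fraction   # evapotranspired by the crop
    return_flow = withdrawal - consumption         # seeps back to aquifers or rivers
    return consumption, return_flow

# Before the improvement: only half of the withdrawn water reaches the crop.
cons_before, return_before = water_balance(100.0, 0.5)
# After switching to drip irrigation: a smaller withdrawal, 90% of which reaches the crop.
cons_after, return_after = water_balance(80.0, 0.9)

print("use:         100 -> 80")                                     # water use declines
print(f"consumption: {cons_before:.0f} -> {cons_after:.0f}")         # 50 -> 72, consumption rises
print(f"return flow: {return_before:.0f} -> {return_after:.0f}")     # 50 -> 8, return flow collapses

Even though metered water use falls from 100 to 80 units in this sketch, the water actually consumed by the crop rises from 50 to 72 units, and the return flow that previously recharged aquifers or supported downstream users almost disappears.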
Where deficit irrigation was practiced before, water use is reduced, but water consumption is increased.To reduce rebound effects, policy measures limiting the total size of irrigated area, reducing the total amount of water rights after efficiency improvement and reassigning a portion of the achieved water savings towards environmental goals have been suggested.However, Loch and Adamson discuss rebound effects and problems associated with such reassignment for an Australian case in which 50% of the achieved water savings are required to be used for environmental goals.In their “Blueprint to Safeguard Europe's Water Resources” the European Commission recommends adequate water pricing as a means to avoid possible rebound effects from efficiency improvements in the water sector.Likewise, Song et al. suggest the need for water prices in China to reflect the real costs.However, for agriculture, Berbel et al. consider increasing water prices to avoid rebound effects to have only a very limited effect.This assessment is partly shared by Dumont et al. who state that agricultural water use is price inelastic where surface water is used, but price elastic where groundwater is used.Considering the test case, novel irrigation technologies are expected to yield higher water use efficiencies.Research on water use efficiency has increased exponentially in recent years, with 20 published articles in 1988, 121 in 2003 and 618 in 2016.Areas of technological improvement include combining remote sensing and soil sensors with modelling approaches for better demand-specific irrigation.While rebound effects from improving the existing irrigation infrastructure in Germany are not expected to be strong, due to the low extent of irrigation at the national level, technological progress may reduce the cost of irrigation and increase associated water productivity.Both factors are likely to promote uptake of irrigation and increase the use of irrigation water in Germany, which would constitute a rebound effect.For rebound effects connected to nutrient management, we found only one study on phosphorus.However, the authors addressed rebound effects only in very general terms and did not provide information on rebound mechanisms or effect sizes.Based on our framework and the particularities of nutrient management in agriculture, we consider direct rebound effects from improved nutrient efficiencies to be likely because farmers seek to apply the amount of fertilizer that, within the bounds of rules and regulations, achieves the highest contribution margin.Due to the law of diminishing returns from inputs to production, this amount is generally lower than the amount that would achieve the highest yields.When the nutrient-use efficiency is increased, such as through a new crop variety that is able to achieve the same yield with a lower fertilizer input, the economic optimum shifts and farmers adapt their nutrient management accordingly.This is likely to result in a situation where fertilizer inputs are reduced, but a portion of the potential resource savings are offset to achieve higher yields.However, farmers may deviate from this theoretical approach due to behavioural norms or due to imperfect knowledge, i.e. “bounded rationality”.
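The shift of the economic optimum can be made explicit with a stylised profit calculation; the quadratic yield response and all prices below are assumptions chosen for illustration and do not originate from the studies cited in this paper:

def optimal_n(p_grain, p_n, k=1.0, b=0.030, c=0.00008):
    """Profit-maximising applied N (kg/ha) when each kg of applied N acts like k kg of effective N.

    Assumed yield (t/ha) = a + b*(k*N) - c*(k*N)**2 and profit = p_grain*yield - p_n*N;
    setting the derivative to zero gives the optimal effective dose, then the applied N."""
    m_star = (b - p_n / (p_grain * k)) / (2 * c)   # optimal effective dose k*N
    return m_star / k

p_grain, p_n = 180.0, 1.0                  # EUR per t grain, EUR per kg N (illustrative)
n_old = optimal_n(p_grain, p_n, k=1.0)     # conventional variety
n_new = optimal_n(p_grain, p_n, k=1.25)    # variety using N 25% more efficiently
n_naive = n_old / 1.25                     # dose that would reproduce the old yield with the new variety

potential_saving = n_old - n_naive
actual_saving = n_old - n_new
rebound = 1.0 - actual_saving / potential_saving
print(f"optimum shifts from {n_old:.0f} to {n_new:.0f} kg N/ha (naive expectation {n_naive:.0f})")
print(f"share of the potential N saving offset by higher yields: {rebound:.0%}")

With these illustrative numbers roughly a fifth of the potential fertilizer saving is spent on pushing yields above their previous level, which is exactly the partial offsetting described above.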
Additionally, “safety thinking” can lead to fertilizer inputs that are higher than the economic optimum.For emerging technologies and practices in Germany, we expect increases in nutrient-use efficiency from the following categories of technologies and practices:Cereal-legume intercrops decrease the need for nitrogen fertilization in relation to yield, due to improved use of soil and atmospheric nitrogen.Some studies also show more efficient use of phosphorus and micronutrients by intercrops.On-going developments in precision farming and decision support systems, including improved sensors, data fusion algorithms and translation into decision support, are likely to increase the efficiency of nutrient use.This is because even current precision technologies can often achieve high efficiency gains, and because technology in this field is rapidly developing.Similar to intercropping, diversifications in the sequence of crops, such as more diverse crop rotations involving more legumes and/or cover crops, can increase nutrient use efficiency.Such diversifications may emerge as a reaction to, for example, pesticide resistance or diversified consumer demand.New fertilizers from recycled nutrients may increase nutrient-use efficiency in the future at the national scale and possibly also at the farm level.Since significant reductions in fertilization and/or increases in yield-to-fertilizer ratios can be achieved with cereal-legume intercrops as well as through improved precision farming and decision support systems, we expect lower production costs and potential economic rebound effects for these innovations.In the case of more diverse crop rotations, economic benefits are unlikely because this practice entails the inclusion of less profitable crops into rotations.This assessment would change if a future diversification of demand were to improve the profitability of alternative crops, such as an increasing demand for perennial, lignocellulosic crops for material uses in the bio-industry.In the case of new fertilizers, we assume that due to more costly production processes, their price will be higher than that of conventional alternatives.Regarding rebound effect sizes, the following appraisals can be made: For intercropping with leguminous crops, direct producer-side rebound effects that would result in higher total nitrogen inputs are not expected.The particularities of the underlying biological processes limit options for farmers in this regard because high external nitrogen inputs reduce the effectiveness of legume nitrogen fixation and do not result in higher overall yields in terms of the total land equivalent ratio.For precision farming, strong direct producer-side rebound effects in the form of higher total fertilizer inputs are possible.In general, an important component of precision farming is calculating the spatially differentiated nutrient demand of plants.In cases of relatively low fertilizer intensities before the implementation of the technology, a higher production potential in some areas of a field with corresponding higher nutrient needs can overcompensate for the reduced fertilizer application in areas with a lower production potential.Accordingly, Jevons’ paradox has been observed in some earlier cases of precision farming.Although this possibility cannot be ruled out for future cases, overall improvements in the precision of fertilization could compensate for partially higher nutrient demands calculated through improved fertilization planning.As for nutrient
management, the literature search identified only one study addressing rebound effects connected to pesticide application.Rotolo et al. describe how the cultivation of genetically modified, pest-resistant crops, originally intended to reduce external pesticide application, has resulted in the emergence of resistant pest species and consequently in a need for increased pesticide use.While these authors consider this to be an example of Jevons’ paradox, the offsetting of the technical efficiency gain is caused by the adaptive capability of nature and not by behavioural changes in human actors.It would, therefore, not fall under the definition of rebound effects used in this paper.However, this example highlights that agriculture operates at the interface between human and natural systems and that both human and non-human, natural actors will adapt to efficiency improvements.With regard to agriculture, it may therefore be advantageous to expand the concept of rebound effects in the future to also include natural adaptations.For our test case, we may see improvements in pesticide use efficiency within the following categories:Improved crop varieties are expected to increase pest resistance, reducing the need for pesticides.Well-designed intercropping systems and more diverse crop rotations reduce the need for pesticide application.Improved precision farming and decision support systems are likely to increase pesticide efficiency.Biotic inoculation can reduce the need for pesticide application.This new technology may become more economically viable with future developments.By increasing pest resistance, improved crop varieties and intercropping can generally lower production costs in terms of pesticide use, even though the overall effect on profitability is uncertain.In contrast, site-specific pesticide application, as a component of precision farming, is likely to largely reduce the total input of pesticides with the development of future technology, such as technology that identifies single plants and treats each plant individually with pesticides or mechanical measures.Efficiency gains from improved crop varieties, intercropping and precision farming/decision support systems could come with direct rebound effects if they motivate farmers to reduce tillage and substitute mechanical weed control with pesticide application.Finally, biotic inoculation is more expensive than the use of chemical pesticides and there is no indication that this will change in the future.Biotic inoculation may rather become a complementary plant protection strategy in cases of pesticide resistance.Pellegrini and Fernández investigated the relationship between agricultural energy use and energy efficiency during the spread of the green revolution."Based on time series from 1961 until 2014, they concluded that at the global level, higher energy efficiencies have not resulted in lower energy use, and that their findings fit the definition of Jevons' paradox. "Lin and Xie investigated factor substitution and rebound effects in China's food industry from the perspective of energy conservation.They found a direct energy rebound effect of 34% and evidence of substitution relationships between energy and other input factors, among which labour was the largest factor.For fuel and electricity consumption in general, Gillingham et al. 
considered the sum of microeconomic and macroeconomic rebound effects to not exceed 60%.In our test case, we saw potential for increases in energy use efficiency through reduced tillage practices.While it is uncertain whether the share of cropland under reduced tillage will increase in Germany, practices that are relatively new in Germany such as strip-till and controlled-traffic farming may provide new incentives for farmers.Reduced tillage methods require less fuel, to a degree that is relevant for farmers’ decision-making, because these methods require less drag force to move soil.Since there is no benefit in more frequent tillage when using improved methods, we only expect indirect rebound effects as a result of re-spending the money saved through innovative management.The effect size will largely depend on the energy intensity of the goods and services upon which the savings are re-spent.In a global study using the partial equilibrium model GLOBIOM, Valin et al. analysed scenarios that could satisfy projected food demand for the year 2050.For a technology pathway where additional production is achieved through higher productivity, these authors found strong demand-side rebound effects that reduced potential greenhouse gas savings by 50%.Benedetto et al. presented an analytical framework for rebound effects in the wine industry.Assuming a theoretical novel product with a lower carbon footprint, they illustrated how total greenhouse gas emissions may either increase or decrease depending on price changes and choices made by producers and consumers.Their example highlighted the complex interactions of market participants and the difficulty inherent in anticipating their reaction to efficiency improvements.Demand for agricultural commodities and the associated resource consumption is also affected by the proportion of food waste.Food-saving consumer behaviour, improved technical processes or novel crop varieties developed to increase product shelf-lives could lower this share in the future.For the effects of consumer-based reductions of food waste on greenhouse gas savings, rebound effects of 51%, ranging from 23% to 59% and ranging from 66% to 106% have been reported.The high effect sizes were a result of indirect rebound effects, with consumers re-spending money originally used for food which causes relatively low greenhouse gas emissions, in favour of greenhouse gas -intensive items such as fuels in transport or energy in housing.In a meta-analysis of life-cycle assessment studies, Clark and Tilman showed that shifting from diets containing high amounts of ruminant meat to diets based on fish, pork, poultry or vegetables could provide the same nutrition with much lower resource consumption.The impact categories they considered included land, greenhouse gas emissions and energy.However, these authors stated that these alternative diets would probably be cheaper and that the re-spending of the money thus saved could result in additional resource consumption.This would constitute an indirect rebound effect.To mitigate such effects, van der Werf and Salou proposed food labels based on greenhouse gas emissions per monetary unit.With this system, products of higher quality that are more expensive would be considered more positively than cheaper alternatives because they reduce the amount of money spent elsewhere and therefore also reduce the associated greenhouse gas emissions.However, in a life cycle study on the ecological benefits of dietary shifts in Europe, Tukker et al. 
found only negligible indirect rebound effects from the slightly lower costs of alternative diets.On the other hand, they considered economy-wide rebound effects to be likely, such as increased meat exports compensating for reduced domestic demand.In our test case, innovations for reducing greenhouse gas emissions are not investigated explicitly.However, where innovations result in land sparing or in a reduction in the use of mineral fertilizers, pesticides or fossil fuels, they also reduce greenhouse gas emissions.Nevertheless, an assessment of total effects requires that additional emissions caused by the innovations themselves be taken into account.In the case of precision farming and decision support systems, an increased demand for electricity, especially for storing data, may partly or fully offset greenhouse gas savings from reduced fertilizer application through emissions associated with electricity production."Fig. 3 gives an overview of the test case's application and major findings.Rebound effects in agriculture are an emerging research topic with a small, but rapidly growing evidence base.However, this novelty constitutes a challenge for systematic reviews, as older publications may not refer to relevant findings in terms of rebound effects or Jevons’ paradox.Where we identified such studies through cross references, we used them to complement the findings of our review.Additional research is desirable for such studies, especially regarding effects from more efficient fertilizer or pesticide use where evidence is lacking.Likewise, more research on social-psychological rebound effects is required, as these effects may play an important role with regard to consumer behaviour, and the knowledge base is still very limited.Quantification of rebound effects is another challenge, due to the interplay of multiple processes and variables, especially in ex-ante assessments."Even for energy efficiency, upon which most research on rebound effects has been focussed so far, published effect sizes differ strongly, with some studies indicating little or no effect, while others identify very high rates or even Jevons' paradox.However, the majority of reports in this field seem to agree that Jevons’ paradox constitutes an extreme case and that increases in efficiency generally result in a reduction of resource consumption.Likewise, direct rebound effects from increased energy efficiency are considered to be relatively small.In a meta-study, Gillingham et al. 
found values ranging from 5% to 40% for direct rebound effects from more efficient fuel and electricity use in developed countries, with most values falling between 5 and 25%.One reason for the divergence in published values for rebound effect sizes lies in the exclusive focus of many studies on selected rebound mechanisms.The framework presented in this paper can be used to highlight which rebound mechanisms are assessed and which are omitted in individual studies, thereby facilitating the interpretation of results.Additional reasons are the use of different time frames, which may lead to different rates of adaptation, and finally, the technical difficulties involved in assessing economy-wide effects.According to de Haan et al., process-based, bottom-up studies generally underestimate rebound sizes because they are unable to fully account for economy-wide effects.On the other hand, studies based on top-down evaluations of time series tend to overestimate effects, due to the difficulties of separating increased resource consumption caused by efficiency gains from increased consumption caused by overall economic growth and corresponding increases in consumer wealth.Accordingly, Gillingham et al. noted that while there are plenty of examples of how both resource-use efficiency and total resource consumption have increased since the industrial revolution, this co-occurrence is not proof of causality and therefore does not necessarily represent a rebound effect.Irrespective of the difficulties and uncertainties involved in determining rebound effect sizes, Maxwell et al. emphasise that the implicit assumption of zero rebound effects made in some studies and policies is not supported by scientific evidence.Several studies have found that consumer-based rebound effects are higher for low-income groups.One reason for this finding could be that low income is associated with a higher degree of unsatisfied demand.For energy use, Grabs attributed a higher rebound effect observed in lower income groups to the tendency to re-spend the saved expenditure on relatively energy-intensive expenditure categories.These findings also illustrate a social aspect of rebound effects: while such effects reduce resource savings, they are based on an increase in production and consumption and contribute to satisfying human demand, thereby potentially improving living conditions.This point needs to be considered in efforts to mitigate rebound effects, especially with regard to low-income countries, where efficiency gains and high rebound effects may contribute to meeting basic human needs.Our literature review revealed that rebound effects in the context of agricultural land and soil management are still a novel topic and that there is a dearth of studies on rebound effects associated with efficiency improvements in several resource categories.More research is particularly desirable with regard to effect sizes at different geographical scales and for social-psychological rebound effects."Nevertheless, we found evidence of the occurrence of strong rebound effects or even Jevons' paradox, particularly for increases in productivity and for more efficient use of irrigation water.Such effects must be considered to provide realistic estimates of resource savings that can be achieved through efficiency increases and sustainable intensification.The rebound effect framework presented in this article is designed to facilitate the required assessments.In our test case, we demonstrated how it can be used to scan future innovations 
in agriculture for potential economic rebound effects at the national scale.While such a broad scan over multiple innovations identifies areas where rebound effects are likely, an application of the framework to single innovations will allow a more detailed assessment of effect sizes.Where rebound effects are likely, policies that promote efficiency increases should aim to include measures for mitigating them and for safeguarding against Jevons’ paradox.Finally, this manuscript exclusively focussed on rebound effects and consequently ignored spill-over effects, where improvements in the use of one resource result in increased consumption of other resources.These effects must be included in comprehensive assessments of novel technologies and practices.This research was funded by the German Federal Ministry of Education and Research under the framework of the funding measure “Soil as a Sustainable Resource for the Bioeconomy – BonaRes”, project “BonaRes: BonaRes Centre for Soil Research, subproject A, B”.
Increasing the efficiency of production is the basis for decoupling economic growth from resource consumption. In agriculture, more efficient use of natural resources is at the heart of sustainable intensification. However, technical improvements do not directly translate into resource savings because producers and consumers adapt their behaviour to such improvements, often resulting in a rebound effect, where part or all of the potential resource savings are offset. In extreme cases, increases in efficiency may even result in higher, instead of lower, resource consumption (the Jevons paradox). Rebound effects are particularly complex in agricultural land and soil management, where multiple resources are used simultaneously and efficiency gains aim to lower the need for farmland, water, energy, nutrients, pesticides, and greenhouse gas emissions. In this context, quantification of rebound effects is a prerequisite for generating realistic scenarios of global food provision and for advancing the debate on land sparing versus land sharing. However, studies that provide an overview of rebound effects related to the resources used in agriculture or guidelines for assessing potential rebound effects from future innovations are lacking. This paper contributes to closing this gap by reviewing the current state of knowledge and developing a framework for a structured appraisal of rebound effects. As a test case, the proposed framework is applied to emerging technologies and practices in agricultural soil management in Germany. The literature review revealed substantial evidence of rebound effects or even Jevons’ paradox with regard to efficiency increases in land productivity and irrigation water use. By contrast, there were few studies addressing rebound effects from efficiency increases in fertilizer use, pesticide application, agricultural energy use, and greenhouse gas emissions. While rebound effects are by definition caused by behavioural adaptations of humans, in agriculture also natural adaptations occur, such as resistance of pests to certain pesticides. Future studies should consider extending the definition of rebound effects to such natural adaptations. The test case revealed the potential for direct and indirect economic rebound effects of a number of emerging technologies and practices, such as improved irrigation technologies, which increase water productivity and may thereby contribute to increases in irrigated areas and total water use. The results of this study indicated that rebound effects must be assessed to achieve realistic estimates of resource savings from efficiency improvements and to enable informed policy choices. The framework developed in this paper is the first to facilitate such assessments.
416
Energy efficiency and time charter rates: Energy efficiency savings recovered by ship owners in the Panamax market
As the energy efficiency of a ship, i.e. the amount of fuel consumed per unit of transport supplied, is a function of both the technical specification and the way in which a ship is maintained and operated, one can distinguish between “technical efficiency” which refers to some baseline conditions, and “operational efficiency” which takes into account the practicalities of the voyage, variability in environmental conditions and commercial realities of operations.One example of the former is the Energy Efficiency Design Index while a measure of operational efficiency can be obtained by taking measurements of fuel consumption and work over a period of time.Energy efficiency is expected to be an important feature for a firm operating a ship, as it influences its overall costs and revenues.There are a number of different markets in shipping where energy efficiency might be reflected in prices: the new build market, the second hand market and the charter markets, both voyage and time charter.In the voyage market, charterers hire ships on a given route and pay a fixed amount, which includes fuel consumption, while in the time charter market the daily price for hiring a ship excludes the fuel costs which are additionally borne by charterers.This article focuses on the time charter market as it represents a classical example of the principal–agent problem, also known as split incentive and tenant–landlord problem, although the verification of the agency problem on the level of investments is not tackled as the data do not enable us to compare the optimal and the observed level of investments.In the time charter vessel owners decide the level of technological energy efficiency, while charterers bear the costs associated with agent’s chosen level of energy efficiency, i.e. the fuel bill in the case of the shipping market.This paper quantifies the extent to which the fuel savings related to energy efficiency ships are captured by ship owners through higher charter rates using a linear regression of about 2000 fixtures in the Panamax dry bulk market observed between 2007 and 2012.This is an important endeavour, as this issue directly impacts the revenues of ship owners and therefore their incentive to invest in energy efficiency.Panamax refers to ships with deadweight of approximately 60,000–80,000 tonnes which are designed to have maximum capacity whilst being able to transit via the Panama Canal, although according to the taxonomy of the database used in this study Panamax ships range between 60,000 and 100,000 tonnes.We selected the Panamax dry bulk sector to carry out our analysis because of its reputation for being competitive.This sector was attributed a total of about 50 MtCO2 in 2007, i.e. about 5% of total sea transport emissions, a considerable quantity which is expected to increase due to higher demand for shipping services.The paper is structured as follows.Section 2 discusses econometric analyses of the time charter market, rewards of energy efficient investments in the shipping sector and the way energy efficiency can be defined.Section 3 describes the data we use in this article while Section 4 discusses the estimation and the result presented in this study.Section 5 draws the conclusions and the policy implications of our work before presenting recommendations for further work which could help take the analysis in this paper forward.Econometric studies modelling time charter and voyage rates can be grouped into two categories, i.e. 
those addressing the relationship among different rates, and those exploring the drivers influencing time charter and voyage rates.The existence of a relationship between time charter rates of different durations or between time charter and voyage rates is explained by the fact that a charterer has the option of entering into a single time charter contract for the whole period he needs a ship for, any mixture of time charter and spot contracts, or a number of voyage rates covering the routes they need to journey.Veenstra and Franses found the existence of a stable long-term relationship between the prices of six routes, three being served by capesize, the other by Panamax ships, all driven by one common trend.Berg-Andreassen reports that the conventional explanation of the time charter rates setting process is essentially correct: spot rate changes matter but spot rate levels do not.With regard to the studies discussing the drivers influencing time charter and voyage rates, fuel price has received considerable attention, which can partly be attributed to the debate on the use of market-based instruments to address CO2 emissions in the shipping sector and to the fuel costs estimated to be about 60% of ships’ costs in the current climate of high fuel prices and low charter rates.As voyage rates comprise costs related to fuel consumption, the relationship between these rates and fuel prices give an idea of the extent to which changes in fuel price are either absorbed by ship owners or passed to charterers.According to UNCTAD, owners of ships used in the iron ore trade passed changes in the fuel price entirely to charterers while only a third of the changes was passed in the wet bulk market, both findings being confirmed by Vivid Economics.In the case of grains, Vivid Economics reports that only 20% of the changes in fuel costs are passed on, about half the value estimated by Lundgren on the USA-Europe route.Findings discussed for the coal market in Chowdhury and Dinwoodie depend on the type of coal, the size of ships, i.e. 
Panamax or capesize, and the route.The average of all estimated models is very close to full transfer of fuel costs, considerably higher than the 40% estimated by Lundgren on the USA-Europe route.On the basis that most of the variables affecting voyage rates are likely to affect time charter rates, it is interesting to discuss the models estimated for voyage rates.The model in Chowdhury and Dinwoodie and Vivid Economics, for example, include bunker prices, trade volume and fleet size.An increase in trade is expected to cause an increase in the voyage rates through an increase in the demand for ships while an increase in the fleet is expected to cause a decrease in the rates.The model in Lundgren includes lay-up and change in trade, as supply and demand factor, respectively.The specification used in UNCTAD and Tsolakis introduces the commodity price among the variables used to explain voyage rates.The coefficient is found to be negative and statistically significant in UNCTAD but positive in Tsolakis for both the Panamax and capesize bulk carrier.As the effect of energy efficiency on time charters has not been explored by the academic literature, our literature review utilises anecdotal evidence derived from news articles and industry reports.Kollamthodi et al., based on an interview with the Norwegian Shipowners Association, claim that charterers are willing to pay higher rates for fuel efficient ships.In the container sector, a recent poll of twenty brokers showed that fuel efficiency was the single most important factor for the hiring of vessels on time charters, and that more efficient vessels obtain rate premiums compared to standard vessels.Maersk Line, one of the leading container companies, recently stated their willingness to pay for the retrofit of the vessels they charter in, as this results in a lower fuel bill for the company.This suggests that charters are willing to trade lower fuel bills for increased costs in the chartering of ships.In fact, analysts have argued for some time that a two-tier market is emerging, with charterers willing to pay more for efficient vessels and older and less fuel-efficient vessels losing out to modern tonnage.In the tanker sector, oil companies are reported to prefer newer vessels even if they are slightly more expensive.It is however unlikely that energy efficiency is fully reflected in the charter rates.Barriers, such as lack of reliable information on costs and savings from an energy saving measure as well as uncertainty as to whether the market will pay a premium for fuel efficient ships will result in sub optimal levels of investment.Based on a survey of five operators and seven other maritime stakeholders, Faber et al. conclude shipowners investing in fuel efficiency cannot recoup their investments, unless they either operate their own ships or have long term agreements with charterers.Wang et al. 
point out that charterers may be unlikely to pay a premium to reflect energy efficiency due to the diversity of the charter markets, where each sector comprises several subsectors reflecting cargo capacity, dimensions and other vessel characteristics, and to the difficulty in verifying fuel consumption claims made by the owners, a key factor in the split incentive problem discussed above.Interviews conducted by one author of this paper pointed to similar conclusions.Three ship owners/operators and a management company showed scepticism towards the notion that charter rates reflect a premium for energy efficiency and towards the notion that investments in energy efficiency sustained by the ship owners could be recouped.An argument for investment in energy efficiency is that it increases the success rate of winning contracts and therefore provides better utilisation rates of the ships, which may be an important factor particularly in an oversupplied market like the current one.According to a large bulk shipping owner, the company would still invest in energy efficiency even if not remunerated through charter rates as many major charterers require ships to comply with a certain level of environmental performance.This view has been recently confirmed by Lloyd’s List, according to which Maersk and other operators in the container market, as well as operators of other vessel types that hire ships on long charters, may not be willing to pay premiums for energy efficient vessels but feel compelled to take in the better ships first, where owners of vessels with a lower performance are forced to start accepting lower rates or shorter contracts.It is interesting to note that the observation that less energy efficient ships are forced to accept lower rates, as discussed in Lloyd’s List, contradicts the statement that energy efficiency is not remunerated through charter rates.Data on the fixtures between January 2007 and September 2012 were taken from the Clarksons Ship Intelligence Network.The SIN database contains information on date of the fixtures, name of the ship being chartered, build year, deadweight tonnage, start and end of laycan period, daily charter rates and length of the charters.Ships involved in the fixtures were matched, based on their IMO number, to the information from the Clarksons World Fleet Register database which contains information on gross tonnage of a ship, installed power, fuel consumption, bunker capacity, the build year and speed.It is worth stressing that all technical data in this dataset describes the design characteristics of ships rather than their operational performance.From the variables above we computed a simplified EEDI as described below.Next, we used the following variables from SIN to describe the state of the economy and the shipping industry: quantity of commodities carried by Panamax, fuel prices, the size of the Panamax fleet, and an index describing the rate prevailing in the market for one-year time charters.The quantity of traded commodity carried by Panamax has been computed by multiplying the total seaborne iron, grain and coal by the share carried by Panamax ships out of the quantity carried by all ship types.Information on the share of each commodity, taken from Stopford, is based on data from 2001 to 2002 and is assumed to stay constant across the period covered by our dataset, as we did not have access to data from more recent years.Finally, we computed the price of the mix of commodities carried by Panamax from the World Bank Pink Data.The price of each commodity was weighted by using its share out of the total cargo transported by Panamax ships in 2010.Data on the shares was sourced from UNCTAD.
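Neither the exact form of the simplified EEDI nor the commodity shares used for the weighting are reproduced in the text above, so the following sketch should be read as one plausible reconstruction with placeholder values rather than as the construction actually used in the paper:

CF_HFO = 3.114  # tonnes of CO2 per tonne of heavy fuel oil (standard IMO carbon factor)

def simplified_eedi(fuel_tonnes_per_day, dwt, design_speed_knots):
    """A plausible design-based efficiency proxy in grams of CO2 per tonne-nautical-mile."""
    co2_per_day = fuel_tonnes_per_day * CF_HFO * 1e6          # grams of CO2 per day
    transport_work = dwt * design_speed_knots * 24.0          # tonne-nm supplied per day at design speed
    return co2_per_day / transport_work

def panamax_commodity_price(prices, shares):
    """Weighted average price of the commodity mix carried by Panamax ships."""
    return sum(prices[c] * shares[c] for c in shares)

print(simplified_eedi(fuel_tonnes_per_day=32.0, dwt=75_000, design_speed_knots=14.0))   # roughly 4 g/t-nm
print(panamax_commodity_price(
    prices={"coal": 90.0, "grain": 240.0, "iron_ore": 140.0},    # USD per tonne, illustrative
    shares={"coal": 0.45, "grain": 0.30, "iron_ore": 0.25},      # hypothetical 2010 cargo shares
))

For a typical Panamax design (32 tonnes of fuel per day, 75,000 dwt, 14 knots) this proxy comes out at roughly 4 g of CO2 per tonne-nautical-mile, while the weighted commodity price is simply the share-weighted average of the individual commodity prices.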
In time charter markets, the energy efficiency of ships is mainly communicated by ship owners through the fuel consumption and speed information provided during the negotiation process and ultimately reported in the contract.Actual performance can deviate from the values listed in the contract due to a number of factors such as weather, deterioration of the hull, deterioration of the engine, and the quality and calorific content of the fuel.Due to the difficulty of verifying information about a ship’s fuel consumption prior to engaging it in a charter, the charterer is normally protected by a guarantee which stipulates the acceptable range for speed and fuel consumption.During or on completion of the time charter, data can be collected to verify whether the advertised energy efficiency was achieved.As increased energy efficiency implies reduced energy consumption and therefore a lower fuel bill for the charterers, one would expect this to be a desirable attribute for somebody looking to hire a ship; thus ship owners may overstate claims related to energy efficiency to attract business.In order to limit unsubstantiated claims, claims can be made against the guarantee.In addition, reputation in the industry to ensure repeat business, either for a specific ship or for the company owning and operating the ship, may constrain short-term benefits from overplaying the energy efficiency of a ship.Unfortunately, energy efficiency performance is difficult to verify, even at the end of the charter, and claims against the guarantee can be difficult to pursue successfully.Fuel performance measurements are normally limited to collecting data about fuel consumed over a long period of time but very detailed information is required to assemble sufficient evidence to effectively pursue claims against the guarantee.Some reduction in the uncertainty in energy efficiency and fuel consumption of a prospective charter can be obtained through the use of third party data.The recently introduced mandatory calculation of the EEDI will increase the quality of the data available to charterers for ships built after 2013, while some data is already publicly available for existing ships through tools such as the EVDI and other broker-held sources.After data cleansing, our dataset contains about 2000 observations.Filters to cleanse data were based on the characteristics of the ships from the Panamax market, e.g. dwt not falling between 60,000 and 100,000 or showing different values for any variables present in both datasets from where the data were sourced.The empirical distributions of the ship and fixture specific variables can be seen in Fig. 1.From the first histogram in Fig.
1a, one can see that the dataset used in this study comprises mainly relatively short fixtures, as testified by the peak in the distribution for contracts ranging between 100 and 200 days.Because of the changing market conditions in the years covered by the sample, the rates of the fixture do not show a distribution similar to the one of the duration of the contracts.As can be seen in the figure, most of the rates vary between 10,000 and 80,000 dollars per day but a significant right tail well into the 100,000 $/day can be noticed.In terms of the number of days between the signing of the contract the start of the laycan period, most contracts start relatively soon after the signing date.Ships chartered out in the fixtures in the dataset were built relatively recently when compared to the 25 year average lifetime of a ship.A reduction in the number of ships built in the last three years or so can be noticed in the figure.The distribution of the deadweight tonnage of the ships in the dataset reflects the market segment analysed in this study, with most of the ships size falling between 70,000 dwt and 75,000 dwt.Finally, in the last graph in Fig. 1a, bunker capacity shows quite a disperse distribution with values ranging between 1500 and 4500 tonnes, with three peaks near the 2100, 2700 and 3200 values.In Fig. 1b, one can notice that the consumption of most ships falls between 30 and 40 tonnes per day.In the next graph one can see that design speed takes mainly two values, 14 and 14.5.Finally, in the last two graphs in Fig. 1b, one can notice the similarity in the distributions of the simplified EEDI and installed power.Considering a correlation of 0.86 between the two variables in our dataset, one can conclude that information conveyed by the simplified EEDI index is of limited additional value compared to the information conveyed by installed power in the case of the Panamax dry bulk market.Fig. 
2 shows the variables representing the condition of the economy and of the ship sector used in this study.Panamax fleet size has increased by about a third since January 2007, and so did the quantity of commodities carried by Panamax ships, although this increased demand is not reflected in the average time charter which collapsed in the last quarter of 2008 before bouncing back slightly in 2010 and falling again in the remaining part of the sample.The bunker price has fully recovered from the crash in the last quarter of 2008, with the price now close to the maximum in the sample.At a consumption of 32 tonnes per day, the recent bunker price implies a daily fuel bill of about 19,000 dollars, nearly twice the recent charter rates on a one-year contract.Finally, one can notice the similarities between the plots of the commodity carried by the Panamax ships and the bunker price.Estimation of the effect of energy efficiency on the time charter rates is carried out using the cross-section dimension of the dataset described above.We adopt the established General-To-Specific methodology which starts from a very general model likely to include irrelevant variables and narrows it down based on the statistical significance of the estimated parameters.The full list of variables includes age of the ship when the contract was signed; its gross tonnage, installed power on the ship measured in horsepower, consumption of fuel measured in tonnes per day, quantity of fuel which can be loaded onto the ship, design speed of the ship; simplified EEDI, as described above, the fuel price in Rotterdam; fleet size measured in total dwt, price of the commodity carried by Panamax ships, traded quantity carried by Panamax ships, number of days where loading of the cargo is allowed without the need of paying the charter rate; length of the charter and number of days between the signing of the contract and the start of the charter, and in the case of relative models the 1 year time charter rate for Panamax ships.Estimated models have been judged on the basis of their fit, on whether the signs of the coefficients conform to theory, and on the basis of the comparison between the effect of energy efficiency imputed by the model and the maximum rational effect.Both models with variables in logarithms and levels have been estimated although only models with variables in levels are discussed, as taking the logarithms of the variables did not reduce the heteroscedasticity of the residuals while making the interpretation of the results less intuitive.The imputed effect of fuel efficiency is computed by using the estimated coefficient on the variable of interest and the difference between the value of the variable representing energy efficiency for a specific ship and the average value in the fleet.The maximum rational effect is computed by multiplying the fuel price observed at the contract date by the difference between a ship’s fuel consumption and the average in the fleet.Overall, one would expect the imputed effect to be smaller than the maximum rational effect and the two effects to have the same sign for most of the fixtures observed in our dataset.
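In symbols, writing c_i for the design fuel consumption of ship i in tonnes per day, \bar{c} for the fleet average, p^{fuel}_{t(i)} for the bunker price in dollars per tonne at the contract date and \hat{\beta} for the estimated coefficient when fuel consumption is the efficiency variable, the two quantities compared are

\[
\text{imputed}_i = \hat{\beta}\,(c_i - \bar{c}), \qquad \text{maximum rational}_i = p^{\mathrm{fuel}}_{t(i)}\,(c_i - \bar{c}),
\]

both expressed in dollars per day, so that the ratio of the imputed to the maximum rational effect measures the share of the fuel saving that is reflected in the charter rate.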
Table 1 shows the coefficients of the estimated models.Model 1 comprises a number of variables referring to technological characteristics of the ships.Unfortunately, we discard this model as: (i) it includes Gross Tonnage rather than the more meaningful Dead Weight Tonnage, and the sign on the former variable is contrary to expectations.As higher values of Gross Tonnage enable a bigger cargo, one would expect a positive effect of this variable on the charter rates; (ii) ship-based variables in the model, with the exclusion of Age, act as a single block as a consequence of the correlation across them and their estimated coefficients.As soon as one ship-based variable is dropped, the significance of the whole block of variables falls apart, and the coefficient on the intercept decreases to compensate for the change.Dropping these non-statistically significant technological variables with the exception of EEDI leads to this variable becoming non-statistically significant, and the value of the coefficient being just 2% of the value in Model 1; (iii) the model delivers implausible results in terms of the imputed effect of energy efficiency.The average of the ratio between the imputed and the maximum rational effects is −1.5 rather than being positive and smaller than one, as one would expect, while the percentage of fixtures for which the sign of the imputed and the maximum rational effects agree is a very low 53%.Continuing the search specification delivers Model 2 in Table 1.As can be seen in the table, no significant changes between the coefficients in Model 1 and Model 2 can be observed, with the exception of the intercept.In Model 2 and Model 3, the latter obtained after dropping commodity price, an increase in the age of the ship decreases the time charter, each additional year decreasing the rate by about 350 dollars per day.The amount of fuel which can be loaded onto a ship is a negative attribute, probably because it reduces the cargo carrying capacity, with each additional tonne decreasing the time charter by about 2.5 dollars per day.Large fleet size causes a decrease in the time charter rates because of increased supply of shipping services, with each additional Million DWT causing a decrease of 2000 dollars per day.Trade has a positive effect on time charter rates, as a consequence of increased demand for shipping services, with each additional Million Tonne causing an increase ranging between 1700 and 1000 dollars per day.With regard to the start of the charter period, charterers seem to have a preference for early starts, each day between the signing of the contract and the start of the laycan period causing a decrease of 60 dollars in the charter rate.Model 2 in the table includes the price of the commodity carried by Panamax ships.Commodity prices are incorporated in Tsolakis and UNCTAD, although these articles disagree on the sign of the coefficient.The rationale for including commodity price is that the rate is correlated with the value of the cargo, according to UNCTAD, while commodity price is said to act as a proxy for transportation demand according to Tsolakis.With regard to fuel prices, it is not entirely straightforward why time charter rates are positively influenced by this variable, as occurs in Table 1, with a possible explanation, as suggested by a referee, related to the fact that high prices would encourage slow steaming and therefore a larger demand for tonnage caused by increased journey time.Both coefficients on the commodity and fuel price take somewhat high values.In Model 2, an increase in the fuel price of 10 dollars per tonne increases the time charter rates by about 1000 dollars per day even though the average increase in fuel expenditure for a Panamax ship would be only about 350 dollars.In the same model an increase in the commodity price of the same amount would result in an increase in the rate of about 2000 dollars per day.Dropping the commodity price from Model 2 causes a
substantial increase in the coefficient on the fuel price in Model 3, which is expected considering the correlation between the two variables – see Fig. 2.In order to cast some light on these two issues, Model 2 has been estimated on rolling and on recursive samples.After ordering the dataset according to the contract date, the first method implies estimating Model 2 on the first 1000 observations, shifting the sample by 200 observations, i.e. adding the next 200 observations while dropping the first 200, and re-estimating the model.This is repeated until there are no observations left to add.Recursive sampling differs from the procedure above as no observations are discarded when new ones are added.If the relationship between fuel price and commodity price on one side, and time charters on the other, is spurious we would expect the value of the coefficient to show considerable instability across samples.As one can see in Fig. 3, the estimated coefficient on fuel price is fairly stable across rolling samples with the exception of the spike observed when using only observations between September 2007 and January 2010.The same can be said in the case of the trade coefficient, although the spike is observed in the first sample.In the case of the fleet variable one has the impression that two very different market conditions are incorporated in the dataset, the values of the coefficient in the first three samples being almost three times the values in the remaining samples in Fig. 3, a fact which may be explained by the rigidity of the supply function described in Koopmans.Finally, the value of the coefficient of commodity price is rather unstable, confirming the divergence reported by Tsolakis and UNCTAD.As one can see in Fig. 3, the estimated coefficient is about 300 or 50 units on either side of zero.Recursive estimation, not shown in the figure, delivers much more stable values in the coefficients across samples due to the increasing number of observations which smooth out the impact of additional data points.
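A minimal sketch of this procedure, assuming the fixtures are already sorted by contract date and stored in arrays y (charter rates) and X (the regressors of Model 2); the variable names and the treatment of the final incomplete window are illustrative choices, not necessarily those made in the paper:

import numpy as np
import statsmodels.api as sm

def rolling_and_recursive_ols(y, X, window=1000, step=200):
    """Re-estimate an OLS model on rolling and recursive subsamples of time-ordered fixtures."""
    rolling, recursive = [], []
    start, end = 0, window
    while end <= len(y):
        # rolling: a fixed-length window that moves forward by `step` fixtures
        rolling.append(sm.OLS(y[start:end], sm.add_constant(X[start:end])).fit().params)
        # recursive: the sample grows over time, earlier fixtures are never dropped
        recursive.append(sm.OLS(y[:end], sm.add_constant(X[:end])).fit().params)
        start, end = start + step, end + step
    return np.array(rolling), np.array(recursive)

Plotting the column of the rolling coefficient matrix corresponding to, say, the fuel price then reproduces the kind of stability check reported in Fig. 3.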
Bearing in mind the instability in Fig. 3, one may be reluctant to include commodity price in the equation determining the value of time charter rates.The argument on the correlation between the value of the cargo and charter rates, as discussed in UNCTAD, may hold across commodities but it does not seem very stable across time, at least in the case of the dataset discussed in this study.The estimation of the coefficient on fuel price is relatively stable across rolling and recursive samples, therefore making a spurious relationship unlikely.A positive coefficient on fuel price could be explained by this variable acting as a proxy for the economic activity and inflationary expectations in the economy.In terms of model fit one can notice that dropping the commodity price does not have a considerable effect on the value of the adjusted R2 and Akaike Information Criterion – compare Model 2 and Model 3 – while dropping fuel price decreases the value of the adjusted R2 by about 30 percentage points – compare Model 3 and Model 4.It would seem desirable to assess whether the fuel price has a legitimate role in the equation of the time charter rates or whether the results in Table 1 are an artefact of the time span used in this study.This is clearly an undertaking requiring a dataset spanning a much longer timespan than that used in this study.Based on feedback from industry practitioners and the results discussed above, we dropped Gross Tonnage and EEDI, respectively, from the general model from where the search specification is started.After doing so, we still faced the issue of whether commodity and fuel price should be included in the model delivered by the search specification, as discussed above.In the models including fuel and commodity price or fuel price only, the estimated coefficient on consumption is about −150, implying an average of the ratio between imputed savings and maximum rational savings of 38%.We decided to discard these models, as the coefficient on consumption is not statistically significant across rolling samples.In the case of the model incorporating neither the fuel nor the commodity price, the coefficient on fuel consumption is about three times the values discussed above.The average of the ratio between imputed savings and maximum rational savings is very close to unity, implying full transfer of the fuel savings realised by more efficient ships to ship owners.Unfortunately, in about a third of the fixtures, imputed savings are higher than maximum rational savings.In addition, the adjusted R2 decreases by about 30 percentage points when fuel price is not used in the estimation.These two reasons lead to these models being discarded.The remainder of this section discusses models estimated based on a relative rather than an absolute approach.In the former, which has been adopted so far, independent variables, including a proxy for energy efficiency, explain time charter rates in the fixtures.In the relative approach, one introduces among the independent variables a benchmark, which captures common factors affecting all fixtures, e.g.
conditions of the industry and the economy, while the remaining independent variables explain the difference between this benchmark and the rate in each fixture.If variables related to the condition in the economy and the industry still have an effect on a certain fixture, one would expect a smaller coefficient compared to those in Table 1, as part of their effect is being captured through the introduction of the benchmark.We have estimated models incorporating variables based on fuel consumption and fuel expenditure, the latter measured by the difference between fuel expenditure of a ship and the average in the fleet, or the average in the ships with similar design speed.As similar results have been obtained regardless of the variable used in the model, only models using fuel expenditure are presented, on the grounds that one can read directly the percentage of monetary savings passed on to ship owners from the coefficient in the model.As it is not a priori clear whether variables describing the conditions of the shipping industry and the economy should have an influence on the differential between a fixture rate and the benchmark, we decided to incorporate all the variables discussed so far in the general model from which search specification was started.The second column of Table 2 shows the model obtained by dropping all non-statistically significant variables.Surprisingly, fuel and commodity price are preferred to fleet size and trade, i.e. the variables traditionally used to represent the conditions of demand and supply in the shipping industry.Estimated coefficients on fuel and commodity price are much smaller than those in Table 1, confirming our expectations above.As we do not have strong prior views on whether variables describing the condition of the economy and the shipping market should be influencing the charter rates in a relative approach, we have estimated a model with ship-specific variables only and a model incorporating fleet and trade as proxy for supply and demand.While the Akaike Information Criterion points to Model 1, there is almost no difference across models according to the adjusted R2.In addition, the value of the coefficient on the difference from average fuel expenditure is about 0.40 across all models in Table 2, implying that about 40% of the financial savings arising from reduced fuel consumptions are recouped by the owners through increased charter rates.It is worth stressing, as discussed above, that a similar figure was found in the specifications obtained when following the absolute approach discussed in Section 4.2.While those models were discarded because the coefficients were not robust across samples, all models in Table 2 display statistically significant coefficients across recursive and rolling samples.It is interesting to discuss how the value of the coefficients on the difference between a ship’s fuel expenditure and the average in the fleet changes across time in presence of the very different market conditions.As shown in Fig. 
4, one can see an overall decreasing pattern in the absolute value of the size of fuel savings captured by shipowners, starting at about 50% and almost halving when using samples geared toward the ending period of our data. The values in this graph have been obtained by implementing the recursive and rolling sample procedure discussed above (see the sketch at the end of this section). This decreasing pattern in the fuel savings captured by shipowners can be justified by a number of factors: (i) when the market was tight, fuel efficient ships were rewarded through a price premium, while in a market with considerable spare capacity they might be increasingly rewarded through higher utilisation rates; (ii) as fuel efficient ships become more common, their relative premium decreases, since potential charterers have more and more fuel efficient ships to choose from, as pointed out by a referee; (iii) slow-steaming has become more common in an environment characterised by high fuel prices and low charter rates. As is widely known, the difference in fuel consumption between a fuel efficient ship and one with a standard design is much smaller when slow steaming than at design speed. For this reason one would expect a decreasing trend in the share of the fuel savings evaluated at design speed, i.e. the variables in our study, that is incorporated in the time charter market. As savings computed at design speed overestimate the effective values captured when slow steaming, the coefficient in the model needs to decrease even though the percentage of savings captured by the shipowners stays constant. It is finally worth mentioning that our findings in relation to the size of fuel savings across time are rather disappointing for owners of fuel efficient ships, bearing in mind the caveat mentioned above. In fact, before the time charter crash they benefited from higher rates and a higher percentage of fuel savings incorporated in the charter price. Unfortunately, both income streams decreased after the market crash. This paper presents the first estimation in the literature of the extent to which energy efficient ships are rewarded in the marketplace for the fuel savings they deliver. Confirming evidence from the industry press, we discovered that only part of the financial savings from energy efficiency accrues to ship owners. More specifically, we found that on average only 40% of the financial savings based on design efficiency accrue to the vessels’ owners, i.e.
the party deciding the efficiency level of the ships, with the estimated percentage showing a decreasing trend, going from about 50% when using only the oldest 1000 fixtures in our sample to 25% when using the most recent 1000 fixtures. From an environmental perspective, this is discouraging because ship owners will not invest in energy efficiency as much as they would if they could retain the whole amount of savings. As this is likely to be due to the lack of information and the difficulty in verifying fuel consumption claims made by ship owners, we believe that any policy facilitating the flow of information on energy efficiency between charterers and owners would incentivise the level of investment in energy efficiency in the shipping sector and consequently reduce emissions from the sector. CO2 emissions from the shipping sector have considerably increased over the last decade, and with the rising influence of Asian countries and their exports, they are likely to continue to do so. Considering the limited decarbonisation options available, increasing energy efficiency is a valuable contribution to reducing emissions in the shipping sector. Our findings are important against the background of the current climate policy in the shipping sector and the policies encouraging the uptake of energy efficiency. If energy efficiency is not adequately rewarded, then it can be introduced only by mandatory standards with which ship owners have to comply. On the other hand, if more energy efficient ships can charge a premium on the market, owners will have an incentive to invest in reducing fuel consumption. In the latter case, the market will help achieve carbon reductions by rewarding environmentally-sound behaviour. The fact that savings delivered by energy efficiency do not accrue entirely to the ship owners can have two explanations. The first is related to the fact that the economic benefits of each transaction are shared between the seller and the buyer depending on their bargaining power. Any savings due to energy efficiency will naturally be divided between ship owners and charterers according to how strong demand for ships is compared to available supply. In fact, our rolling estimation confirms that our estimate of the percentage of the savings accruing to the owner decreases as observations from the peak of the market are dropped from the estimation sample, as can be seen in Fig.
4. In a tight market, ship owners have increased bargaining power and therefore those with more energy efficient ships are more likely to be fully rewarded for the savings enjoyed by charterers. From a policy point of view, there is not much one can do about this, as it seems inappropriate to advocate a tight shipping market for the sake of the uptake of energy efficiency. The second explanation of our findings is related to the enforcement of contractual clauses. As fuel consumption clauses included in the time charter contract may be difficult to verify, charterers may find it difficult to enforce the fuel consumption guarantee in the contract owing to the related legal expenditure. In these settings, it would seem perfectly reasonable for time charterers to be reluctant to pay the full premium for energy efficient ships. To the extent that our results are caused by this factor, policy has a considerable role to play. Any instrument facilitating the diffusion of information or reducing the costs of holding ship owners accountable for their energy efficiency claims will help increase the maximum amount that time charterers are willing to pay for increased energy efficiency and stimulate the uptake of energy efficiency investments. Any type of policy facilitating the communication between charterers and owners, such as the EEDI, is helpful to increase the revenues accruing to owners of energy efficient vessels. Other helpful policy measures may be related to the provision of operational data which can be used by charterers to verify fuel consumption, such as weather data. As the operational energy efficiency of a ship is also influenced by the condition of the ship, such as the deterioration of the hull and engine, any labelling scheme signalling to potential charterers the operational performance of a ship will be helpful in facilitating the uptake of energy efficiency investments. Similarly, setting up a registry or database detailing the consumption of the ship in previous charters can help potential charterers quantify the financial savings which can be expected. Throughout the discussion in this paper, it is easy to identify several areas where further work would be beneficial. It is possible that spurious relationships among the variables in the model are caused by variables not considered in the model; for example, the positive relation between fuel price and charter rates could be explained by more slow-steaming and thus a larger demand for tonnage. Capturing this relationship would require additional data sources, and one potential data source is the satellite automatic identification system, with which it is possible to determine the operating speed of ships. Secondly, we discussed that the key finding of this paper, i.e. the percentage of savings recovered by shipowners, is directly related to market supply and demand and the extent to which this is captured in the modelling. Possible improvements and refinements can come from an extended data set covering a longer period of time, to even out the supply-demand imbalances witnessed in the period used in this paper. Unfortunately, the Clarksons data set does not go beyond that which has been used in this paper, and alternative sources such as IHS Fairplay also cover only a shorter period.
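As a concrete illustration of the relative approach and of the rolling and recursive sample procedure used in this section, the following sketch shows how the pass-through coefficient could be estimated. It is a minimal, hypothetical example rather than the authors' code: the input file and the column names (fixture_date, rate_diff, fuel_exp_diff, fuel_price, commodity_price) are assumed stand-ins for the Clarksons fixture variables described above, the window of 1000 fixtures mirrors the samples discussed in the conclusions, and the step of 100 fixtures is arbitrary.

```python
# Hypothetical sketch, not the authors' code: file and column names are assumptions.
import pandas as pd
import statsmodels.api as sm

fixtures = pd.read_csv("panamax_fixtures.csv").sort_values("fixture_date")

def pass_through_coefficient(df):
    """OLS of the charter-rate differential on the fuel-expenditure differential.
    The coefficient on 'fuel_exp_diff' is read as the share of fuel savings
    (relative to the fleet average) recouped by the ship owner."""
    exog = sm.add_constant(df[["fuel_exp_diff", "fuel_price", "commodity_price"]])
    return sm.OLS(df["rate_diff"], exog).fit().params["fuel_exp_diff"]

window, step = 1000, 100

# Rolling samples: a fixed-length window moved through the fixtures in date order.
rolling = [pass_through_coefficient(fixtures.iloc[s:s + window])
           for s in range(0, len(fixtures) - window + 1, step)]

# Recursive samples: the starting point is fixed and the sample is extended.
recursive = [pass_through_coefficient(fixtures.iloc[:e])
             for e in range(window, len(fixtures) + 1, step)]
```

Reading the coefficient directly as a percentage of savings relies on the relative specification, in which the dependent variable is the differential between the fixture rate and the benchmark.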
This paper presents the first analysis of how financial savings arising from energy efficient ships are allocated between owners and those hiring the ships. This is an important undertaking, as the allocation of financial savings is expected to have an impact on the incentives faced by ship owners to invest in more energy efficient vessels. We focus on the dry bulk Panamax segment as it contributed around 50 Mt (5%) of total CO2 emissions from shipping in 2007, and therefore its importance in terms of environmental impact should not be neglected. The time charter market represents a classical example of the principal-agent problem, similar to the tenant-landlord problem in the buildings sector. We discovered that on average only 40% of the financial savings delivered by energy efficiency accrue to ship owners for the period 2008-2012. The finding that only part of the savings is recouped by shipowners, affecting their incentives towards energy efficiency, could consequently have implications for the type of emission reduction policies adopted at both global and regional levels. © 2014 The Authors.
417
The global burden of disease attributable to alcohol and drug use in 195 countries and territories, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016
Alcohol and other drugs have long been consumed for recreational purposes.1,So-called illicit drugs are substances for which extramedical use has been prohibited under international control systems.2,Illicit drugs include, but are not limited to, opioids including heroin, morphine, opium, and other pharmaceutical opioids; cannabis; amphetamines; and cocaine.Harms can also occur due to extramedical use of prescription drugs.In this Article, we will refer to all use of drugs as drug use."Dependence on illicit and prescription drugs can develop among people who use them regularly over a sustained period, and is characterised by a loss of control over use and increased prominence of use of the substance in a person's life. "The ICD 10th edition definition,3 which was broadly similar to the American Psychiatric Association's DSM-IV definition,4 requires that at least three of the following criteria are met: a strong desire to take the substance; impaired control over use; a withdrawal syndrome on ceasing or reducing use; tolerance to the effects of the drug; a disproportionate amount of time spent by the user obtaining, using, and recovering from drug use; and continuing to take drugs despite the problems that occur.Substance use also carries risks of other adverse health outcomes.For example, injection of drugs carries risks if non-sterile injecting equipment is used, because of potential exposure to HIV and viral hepatitis, other infections, and other injection-related injuries and diseases such as sepsis, thrombosis, and endocarditis.5,Alcohol use increases the risk of unintentional and intentional injury, and both non-communicable and infectious diseases.1,6,Use of both alcohol and drugs can cause harm to others.7,Evidence before this study,We did a systematic review of PubMed, EMBASE, and PsycINFO for epidemiological studies of prevalence, incidence, remission, duration, and excess mortality associated with substance use and substance dependence published between Jan 1, 1980, and Sept 7, 2016, without language restrictions.Full search terms are listed in the appendix.We also searched grey literature, and supplemented our search through consultation with experts.Previous Global Burden of Disease studies have provided evidence on overall burden attributable to alcohol and drug use and more detailed assessment of alcohol and drug use burden, but with each iteration of GBD, new data and improvements to methods provides better estimates of this burden.Other organisations, including WHO and the UN Office on Drugs and Crime, periodically produce estimates of health consequences of alcohol and drug use.This GBD study provides the first detailed peer-reviewed estimates of attributable burden due to both alcohol and drug use available for all locations between 1990 and 2016, directly contrasting the prevalence and burden due to these different substances.Added value of this study,We provide clear comparative analysis of alcohol and drug epidemiology and attributable burden.The results of this study show that considerable geographical variation exists with regard to the magnitude and relative contribution of alcohol and drug use to disease burden.To the best of our knowledge, this study is the first to provide estimates of the association between alcohol and drug attributable burden and sociodemographic development.Our results show that burden attributable to alcohol and drug use is strongly associated with socioeconomic development, and its composition varied across Socio-demographic Index 
quintiles.Other consequences of alcohol use were much larger causes of disease burden than alcohol use disorders, and many of these were much more common in countries with a lower SDI than those with higher SDIs.Drug-attributable burden was higher in countries with higher SDI than those with a lower SDI, and most of this burden was attributable to drug use disorder, rather than other consequences of drug use such as HIV/AIDS, acute hepatitis, liver cancer, cirrhosis and other liver disease due to hepatitis, or self-harm.Implications of all the available evidence,Alcohol and drug use cause substantial disease burden globally, and the composition and extent of this burden varies between countries and is strongly associated with sociodemographic development.Since 1990, there has been a considerable increase in the number of people with alcohol and drug use disorders globally, driven by population growth and population ageing.Age-standardised prevalence also increased for opioid, cocaine, and amphetamine use disorders.The prevalence of substance use disorders varied substantially by substance and across countries, with clear differences between different geographical regions.Alcohol and drug use contribute substantially to the global burden of disease, not only through substance use disorders but also from other disease consequences resulting from use.For example, a high proportion of disease burden attributable to alcohol was due to other outcomes, including unintentional injuries and suicide, cancers, and cirrhosis, and the consequences of chronic hepatitis C infection make a substantial contribution to the burden attributable to drug use.Interventions that reduce the prevalence of these other health outcomes are available and need to be scaled up, but this scaling remains a challenge even in high-resource settings.Since 1993, estimates of the causes of global disease burden have used disability-adjusted life-years,8 which combines measures of disease burden caused by premature mortality and burden due to disability.The comparative risk assessment approach developed for GBD provides a conceptual framework for population risk assessment of exposures to risk factors and their attributable health burden;9 alcohol and drugs are included as risk factors in this approach.Each iteration of GBD has updated estimates of modelled prevalence of alcohol and drug use disorders, burden due to those disorders, and burden attributable to alcohol and drug use.Improved methods are used in each iteration of GBD, with increased data coverage, and better strategies to inform the modelling that occurs in GBD.In this Article, we use data from the Global Burden of Diseases, Injuries, and Risk Factors Study 2016, to estimate the prevalence of alcohol and drug use disorders, and to calculate the burden attributable to alcohol and drug use globally and for 195 countries and territories within 21 regions and seven super-regions between 1990 and 2016.We present global and regional estimates of alcohol, amphetamine, cannabis, cocaine, and opioid use disorders; report disease burden attributable to each of these disorders in terms of YLDs, YLLs, and DALYs; summarise burden due to alcohol and drug use as risk factors for other health outcomes; and analyse the association between alcohol-attributable and drug-attributable burden and Socio-demographic Index quintiles.GBD 2016 also quantified burden attributable to alcohol and drug use as risk factors for other health outcomes in the comparative risk assessment.26,The 
comparative risk assessment method estimated the burden from a risk factor attributable to an exposure compared with an alternative exposure distribution.9 For drugs, the counterfactual exposure distribution was no use of the substance in the population; for alcohol it was between 0 and 0·8 standard daily drinks. Literature reviews were done to estimate relative risks for dimensions of alcohol and drug use as risk factors for other health outcomes to which they were considered causally linked. Disease burden associated with the characteristics of alcohol consumption patterns has been explored in detail elsewhere.27 Causality was established by standard epidemiological criteria, with an attempt to be comparable across all risk factors included in the GBD comparative risk assessment. On the basis of exposure and relative risk, population attributable fractions (PAFs) were calculated, which denote the burden of disease that could have been avoided if individuals were not exposed to substances.28 The substance-attributable burden was calculated by multiplying the attributable fractions by the respective burden estimates. The alcohol and drug use risk factor outcome pairings included in GBD 2016 are summarised in the appendix. RR estimates were used together with a mixed effect meta-regression with age-integration from DisMod ordinary differential equations to calculate PAFs. PAFs were multiplied by relevant cause-specific DALYs to calculate attributable burden. Details of the comparative risk assessment modelling process have been published in full elsewhere.26 Comparative risk assessment requires exposure and relative risk for pairings that have been defined as causally linked. Alcohol exposure was modelled as a continuous risk factor for the dimension of level of consumption, and for injury and coronary heart disease outcomes, patterns of drinking were added as an additional dimension. Exposure to alcohol was based on a triangulation of survey and sales data,29 as described previously.14,27,30 For opioid, amphetamine, and cocaine use as risk factors for suicide, dependent use of these substances was the defined exposure.14 For injecting drug use as a risk factor for HIV, we extracted data on the proportion of notified HIV cases by transmission route from global HIV surveillance agencies.14,31 For injecting drug use as a risk factor for hepatitis C and hepatitis B viruses, we used a cohort method, estimating the accumulated risk of individuals having incident hepatitis B and C due to injecting drug use. We pooled data on injecting drug use in DisMod-MR 2.1, did a meta-analysis of hepatitis B and hepatitis C incidence among people who inject drugs, and estimated the population-level incidence of hepatitis B and C since 1960.14,31 The SDI is the geometric mean of total fertility rate, income per capita, and mean years of education among individuals aged 15 years and older, which was included as a composite measure of developmental status in GBD 2016. The index is similar to the human development index. SDI scores range from 0 to 1. To calculate the SDI, these three attributes were rescaled whereby 0 was the lowest value observed between 1980 and 2016, and 1 was the highest observed value. Each GBD location was allocated an SDI score for each year. In this study, we investigate the association between SDI and DALYs attributable to alcohol and drug use.
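The population attributable fraction and the SDI described above can be illustrated with a short sketch. This is a simplified, hypothetical example rather than the GBD code: it uses the dichotomous form of the PAF and made-up numbers, whereas the GBD comparative risk assessment works with full exposure distributions, age-integrated relative risks, and 1000 posterior draws.

```python
# Illustrative sketch only; numbers and the dichotomous PAF form are simplifications.
import numpy as np

def paf_dichotomous(exposure_prevalence, relative_risk):
    """Population attributable fraction for a dichotomous exposure: the share of
    burden that could have been avoided under the counterfactual of no exposure."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def sdi(fertility, income, education, observed_min, observed_max):
    """Socio-demographic Index as described above: each component is rescaled to
    the 0-1 range of values observed in 1980-2016 and the geometric mean is taken.
    (In the published SDI the fertility component is oriented so that lower
    fertility maps to a higher score; that detail is omitted in this sketch.)"""
    x = np.array([fertility, income, education], dtype=float)
    lo = np.asarray(observed_min, dtype=float)
    hi = np.asarray(observed_max, dtype=float)
    rescaled = (x - lo) / (hi - lo)
    return float(np.prod(rescaled) ** (1.0 / 3.0))

# Hypothetical pairing: an exposure with 0.5% prevalence and relative risk 6,
# applied to 250 000 cause-specific DALYs.
fraction = paf_dichotomous(exposure_prevalence=0.005, relative_risk=6.0)
attributable_dalys = fraction * 250_000   # PAF multiplied by cause-specific DALYs
```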
The funder of the study had no role in study design, data collection, data analysis, data interpretation, or the writing of the report. All authors had full access to the data in the study and final responsibility for the decision to submit for publication. Globally, alcohol dependence was the most prevalent of the substance use disorders, with 100·4 million estimated cases in 2016. The most common drug use disorders in 2016 were cannabis dependence and opioid dependence. Amphetamine and cocaine dependence were less common, with cocaine dependence the least common. Across all substance use disorders, age-standardised prevalence was significantly higher for men than women, with the exception of other drug use disorders. Between 1990 and 2016, the global prevalence of all substance use disorders increased for both men and women. Conversely, age-standardised prevalence decreased for all substance use disorders, with the exception of the other drug use disorders group. The estimated decrease in age-standardised prevalence was greater among women than men for all substance use disorders. For all ages, the increase in prevalence of disorders was greater among men than women, with the exception of other drug use disorders, whereby the increase in prevalence was greater for women than men. Substantial regional variations were observed in the estimated prevalence of substance use disorders. Australasia was among the regions with the highest age-standardised prevalence across all drug use disorders, and age-standardised prevalence of amphetamine dependence was highest in this region. High-income North America had the highest prevalence of cannabis, cocaine, and opioid dependence. The prevalence of alcohol use disorders was highest in Eastern Europe. The disease burden of substance use disorders varied substantially by region and country, reflecting variations in prevalence. Global DALYs attributable to alcohol use were highest for injuries, cardiovascular diseases, and cancers. Drug-attributable DALYs were highest for drug use disorders, cancers and cirrhosis driven by chronic hepatitis C infection due to injecting drug use, and HIV. There were similar patterns for deaths, YLLs, and YLDs. Overall, 2·8 million deaths were attributed to alcohol use, and 452 000 deaths were attributed to drug use. The estimated number of deaths, YLLs, YLDs, and DALYs attributed to alcohol and drug use varied considerably between regions. The highest alcohol-attributable burdens were in Eastern Europe and Southern sub-Saharan Africa. The highest drug-attributable burdens were in Eastern Europe and high-income North America. In terms of absolute burden, the largest number of alcohol-attributable DALYs were in East Asia, South Asia, Eastern Europe, and Tropical Latin America, and the largest number of drug-attributable DALYs were in East Asia, high-income North America, South Asia, and Eastern Europe. Globally, in 2016, 99·2 million DALYs and 4·2% of all DALYs were attributable to alcohol use, and 31·8 million DALYs and 1·3% of all DALYs were attributable to drug use. Globally, alcohol accounted for around three-quarters of all substance-use-attributable DALYs. Drugs accounted for a higher percentage of substance-use-attributable DALYs than alcohol in two regions: high-income North America and North Africa and the Middle East. The proportion of DALYs attributable to alcohol and drug use, and the contribution of each to overall DALYs, varied substantially, both in the absolute and relative data, at the regional level. Alcohol-attributable burden was largest for Eastern Europe and Central Europe, whereas alcohol
attributable burden was smallest for North Africa and the Middle East and Western sub-Saharan Africa.The regions where drug use accounted for the highest proportion of DALYs were high-income North America, Eastern Europe, and Australasia.Drugs accounted for the smallest percentage of total DALYs in several regions in sub-Saharan Africa.The overall disease burden attributable to alcohol and drugs varied substantially between countries.The countries with the highest alcohol-attributable age-standardised DALYs per 100 000 people were in Eastern Europe, and sub-Saharan Africa.Countries with the highest age-standardised DALYs per 100 000 people attributable to drug use were the USA, Russia, and Mongolia.The distribution of diseases or injuries that contributed to alcohol and drug-attributable burden varied by GBD region.HIV accounted for a large proportion of disease burden attributable to drug use in African regions.Drug use disorders were the largest contributor to drug-attributable burden in almost all regions, particularly Australasia, high-income North America, and North Africa and the Middle East.Alcohol burden was attributed to a wider variety of diseases and injuries than drug burden.Disease burden attributable to alcohol and drug use was strongly associated with socioeconomic development, and burden composition varied across SDI quintiles.Alcohol use disorder was a much smaller cause of disease burden than other consequences of alcohol use, and many of these were much more common in countries with a low SDI.Tuberculosis and lower respiratory infections were considerable causes of burden due to alcohol use in low SDI countries, whereas middle SDI and high-middle SDI countries had larger alcohol-attributable cardiovascular disease burden.Drug-attributable burden was higher in countries with higher SDI, and most of this burden was due to drug use disorder.HIV/AIDS was a larger cause of drug-attributable burden in low SDI countries than high-middle SDI countries, whereas the consequences of chronic hepatitis C virus infection were greater in countries with higher SDI.Since 1990, the number of people with alcohol and drug use disorders has increased substantially, driven by population growth and population ageing.Age-standardised prevalence also increased for opioid, cocaine, and amphetamine use disorders.The prevalence of substance use disorders varied markedly by substance and across countries, with clear differences between different geographic regions.Substance use disorders were not the only conditions that contributed to the global burden of disease attributed to alcohol and drug use.A high proportion of the disease burden attributable to alcohol was due to increased risk of other health outcomes, including unintentional injuries and suicide, cancers, and cirrhosis, and the consequences of chronic hepatitis C infection make a substantial contribution to the disease burden attributable to drug use.Disease burden attributable to alcohol and drugs and the composition of this burden varied substantially across geographical locations.Eastern Europe had the highest age-standardised attributable burden for alcohol, followed by southern sub-Saharan Africa, and the highest age-standardised attributable burden for drug use was in high-income North America.The association between geographical differences in attributable burden and SDI varied for alcohol and drugs.Countries in low SDI and middle SDI quintiles had the highest alcohol-attributable burdens, whereas countries in high SDI quintiles 
had the highest drug-attributable burden.Globally, large variation in attributable burden has been observed within countries with the highest SDIs.32,In GBD 2013, in Finland, overall age-standardised alcohol-attributable DALYs were 1567 per 100 000 in 2013, whereas in Norway, with the lowest burden, corresponding estimates were 698 per 100 000; DALYs in Denmark were 1530 per 100 000 and in Sweden 950 per 100 000.All countries had a similar attributable disease pattern and the majority of alcohol-attributed DALYs were due to YLLs, mainly from alcohol use disorder, cirrhosis, transport injuries, self-harm, and violence.32,The high attributable burden, even in high-income countries, where a substantially higher proportion of health budgets are spent to address these issues, deserves attention.Multiple factors might contribute to this burden, including low treatment rates, delays in initiating treatment, and stigma associated with alcohol and substance use disorders.33–35,A longstanding problem in most countries36 is also the poor availability of highly effective interventions that can address HIV and hepatitis C virus among people who inject drugs, such as needle and syringe programmes, HIV and hepatitis C virus treatment, and opioid substitution therapy.The emergence of alcohol-attributable burden in Southern sub-Saharan Africa reflects the changing strategies of the alcohol industry, which has started to target Africa and other low-income and middle-income countries37–39 in the past few years to avoid the stricter regulation of the market and public health initiatives in high-income countries, where consumption has been steadily falling.This calls for the global health community to respond adequately to accelerate efforts toward development of a framework convention for alcohol control,40 similar to that which has been implemented to counter the harmful effects for tobacco consumption.Many of the causes of alcohol and drug burden can be prevented or treated.Taxation and regulation of availability and marketing can substantially reduce harms associated with alcohol.41,Additionally, reducing the alcoholic strength of beverages and minimum pricing show promise in reducing alcohol-attributable harm.42,43,Transport injuries are an important consequence of alcohol use that can be prevented via a range of interventions.Treatment and brief interventions have been shown to be effective with a potential public health impact,35 but of all mental health disorders, alcohol use disorder has the lowest treatment rates globally.44,Medications for alcohol dependence such as naltrexone have shown efficacy, but uptake and adherence are very low; for example, in Australia, only around 0·5% of people who are alcohol-dependent are estimated to have been prescribed naltrexone or acamprosate for the recommended 3 month duration.45,Psychosocial interventions might assist people with cannabis and psychostimulant use disorders.46,47,Opioid substitution therapy reduces opioid use and injecting risk, improves physical and mental wellbeing, and reduces mortality.48–50,Opioid overdose might also be reduced by distributing the opioid antagonist naloxone to reverse overdoses.51,Much of the burden due to infectious disease among people who inject drugs could be averted by scaling up needle and syringe programmes, opioid agonist therapy, and HIV antiretroviral therapy.52–55,However, coverage of these interventions remains low.56,The development of highly effective hepatitis C virus treatments has the capacity to increase 
rates of hepatitis C virus treatment among people who inject drugs,57 and might produce secondary prevention benefits, similar to those observed with HIV treatment.57,One of the biggest barriers to the scale up of hepatitis C virus treatment will be the high cost of these medications.It is also crucial to acknowledge that for people who inject drugs, without coverage of blood borne virus prevention interventions, such as needles and syringe programmes and opioid agonist therapy, the preventive effects of hepatitis C virus treatment will be limited.The limitations of the GBD approach have been described previously.11–14,16,Such limitations include gaps in data, variable data quality, and controversies with regard to disability weights used to estimate non-fatal disease burden.Although this study modelled results where data were not available, the gaps in data for many countries result in uncertainty around the modelled estimates, which can only be reduced by improved epidemiological evidence.Alcohol consumption consists of recorded and unrecorded consumption data and is subject to uncertainties.The absence of a gold standard method in measurement of the prevalence of drug use poses major challenges for cross-national comparisons.An important limitation of all substance use cause of death estimates was variation in ICD-codes used to classify overdoses across countries, which will be improved in the next GBD.The distribution of substance use disorders across levels of severity was informed by analyses of data from the USA and Australia.The extent to which the severity distribution of substance use disorder cases is consistent across countries is an important question.For example, in countries where patterns of use or effects of use on functioning are more severe, a greater proportion of disorders might be classified as severe, and potentially fewer people are classified as having no disability.Thus, our estimates might have underestimated burden associated with use disorders.Studies that investigate not only the prevalence of substance use disorders but also levels of severity of those cases across countries that vary culturally, economically, and socially are needed to ascertain whether the severity distribution of substance use disorder cases has been incorrectly estimated, and if so, the magnitude of the error.The GBD uses the ICD-10 classification system for injuries and diseases."The introduction of the American Psychiatric Association's DSM-5 included a shift from DSM-IV's abuse and dependence,4 to a category of use disorder.58",There has been some discussion regarding the consistency of substance use disorder definitions used in DSM-5 compared with other classification systems, with results suggesting that moderate agreement exists between moderate to severe DSM-5 substance use disorders and ICD-10 dependence,59–61 but prevalence of DSM-5 moderate to severe substance use disorder might be higher than that of DSM-IV and ICD-10 dependence,59,62 implying that estimates of substance use disorder burden might have been higher if DSM-5 prevalence estimates were used.Research examining the methodology used in GBD to generate disability weights, namely paired comparison responses, has revealed the method suggests that the approach of simultaneous estimation of cardinal severity values from a pooled dataset with a combination of responses to chronic and temporary paired comparisons is a reasonable methodological choice.20,21,Disability weights can be generated by health-care professionals, 
individuals with the disorder, or the general public.Arguments can be made for each of these groups; these have been discussed in detail previously.20,21,The disability weights used in GBD 1996 were generated by health-care professionals on the basis that they would have knowledge of a diverse set of health states and would be able to make comparative judgments.Individuals in a health state have the most intimate knowledge of the reductions in function associated with that state, but they will be less able to make comparisons with other health states.Such individuals might have adapted to their health loss, and therefore not appreciate the extent to which their functioning has been impaired relative to a completely healthy individual.Our comparative risk assessment of burden attributable to drug use is conservative because a range of potential health outcomes of drug use were not included.First, although unintentional injuries and homicide are often among the most prevalent causes of death among people who are dependent upon opioids, cocaine, and amphetamines,48 they have not yet been included as outcomes of these forms of drug dependence.We aim to present evidence that will warrant their inclusion in future iterations of GBD.Second, the evidence for a causal association between drug use and a range of possible outcomes was weak.However, the inclusion of these outcomes might be reconsidered in future iterations of GBD, since a 2016 WHO monograph63 on the health effects of cannabis use reported increasing evidence for a causal link between cannabis use and road traffic accidents.Third, many putative consequences of drug use exist, for which we did not attempt to quantify the magnitude of possible associations because the level of evidence was too low.2,These consequences included a range of health outcomes that are increased among people with drug dependence, such as mental disorders, myocardial infarction, and cardiovascular pathology.64,Well designed prospective studies are needed to estimate the risks of these consequences of drug use while controlling for confounding factors.Finally, in GBD, the concept of disability is intended to only capture the health loss of an individual.Thus, disability does not include social or other impacts on non-drug users such as the family or the social and economic consequences of mental and substance use disorders.To that extent, our estimates of disease burden due to alcohol and drugs are partial estimates of the adverse impact of substance use on society.Alcohol and drug use cause substantial disease burden globally, and the composition and extent of this burden varies substantially between countries, and is strongly associated with social development.Existing interventions that are known to reduce the varied causes of burden exist.These interventions need to be scaled up, which remains a challenge even in high-resource settings.For more on visualisation tools see http://www.healthdata.org/gbd,For more on data input tools see http://ghdx.healthdata.org/gbd-2016/data-input-sources and https://vizhub.healthdata.org/epi/,For more on the human development index see http://hdr.undp.org/en/content/human-development-index-hdi,All GBD 2016 analyses adhered to the Guidelines for Accurate and Transparent Health Estimates Reporting.10,A suite of visualisation tools is available to explore GBD data inputs and outputs.Full details of the overall methods used to assess disorder prevalence,11 burden of disorders,11 mortality,12 overall substance use disorder burden, 
calculated by the equation DALYs = YLLs + YLDs,13 and burden attributable to risk factors, including alcohol and drug use14 have been described previously.Disease burden was quantified by geography, for 23 age groups, both sexes, and six timepoints between 1990 and 2016.The GBD 2016 geographical hierarchy included 775 total geographies within 195 countries and territories, within 21 regions and seven super-regions.Comprehensive methods used in GBD 2016 for estimating YLDs, YLLs, and DALYs have been described previously, and the process used to estimate prevalence-based YLDs, YLLS, and DALYs is described in the appendix.Substance use disorders were defined according to DSM-IV4 and ICD-10.3,Six substance use disorders were included: opioid dependence, cocaine dependence, amphetamine dependence, cannabis dependence, alcohol dependence, and fetal alcohol syndrome.15,A residual category of other drug use disorders was also included.Input data on causes of death were obtained from vital registration, verbal autopsy, and surveillance databases from 1980 to 2016.16,Normative life tables were generated using data on the lowest death rates for each age group within geographies with total populations of more than 5 million.YLLs were then estimated by multiplying cause-specific deaths at a specific age by the standard life expectancy at that age obtained from normative life tables.Full details of all the modelling processes have been published previously.16,The Cause of Death Ensemble model strategy was used to model cause of death data by location, age, sex, and year for each substance use disorder.12,The CODEm outputs for all GBD causes were then rescaled to establish estimates consistent with all-cause mortality levels for each age, sex, year, and location.Deaths coded as alcohol and drug poisonings were attributed to the relevant alcohol and drug use disorders.We did systematic reviews of the literature to compile data on the prevalence, incidence, remission, and excess mortality associated with each disorder.We searched PubMed, EMBASE, and PsycINFO databases and grey literature sources in accordance with the Preferred Reporting Items for Systematic Reviews guidelines.17,For each epidemiological parameter, eligible estimates were derived from studies published since 1980.Datapoints are summarised for each disorder in the appendix and the data input tools are available elsewhere.The epidemiological data obtained from our systematic literature reviews were modelled in DisMod-MR 2.1,18 a Bayesian meta-regression tool that pools datapoints from different sources and adjusts for known sources of variability to produce internally consistent estimates of incidence, prevalence, remission, and excess mortality.Estimates are generated for locations where raw data are unavailable using the modelled output from surrounding regions.According to the GBD protocol, an uncertain estimate is preferable to no estimate, even when data are sparse or not available, because no estimate would result in no health loss from that condition in the location being estimated.DisMod-MR 2.1 also uses both study-level and location-level covariates to better inform the epidemiological models.Study-level covariates adjust suboptimal data toward those considered to be the gold standard, whereas location-level covariates help DisMod-MR 2.1 better predict disorder distribution.DisMod-MR 2.1 analyses ran in a sequence of estimations at each level of the GBD geographic hierarchy with consistency imposed between estimates at each 
level.Although our inclusion criteria ensured minimum study quality, considerable variability was identified between studies that reflected the use of different methods and analyses.19,Data availability varied across disorders and regions.Uncertainty in both the epidemiological data and in modelling was propagated to the final prevalence output used to calculate YLDs in addition to the uncertainty from fixed effects and random effects for country and regions.18,We used disability weights to quantify the severity of the health loss associated with a particular disease or injury, and disability weights for each injury or disease were applied to the prevalence of that condition.In this study, we used disability weights generated by the general public, on the basis of the argument that their views are relevant in comparative assessments that inform public policy.20,21,In GBD 2016, disability weights were obtained from population surveys in various different countries and from an open-access survey available in multiple languages in which lay participants were presented with pairs of short non-clinical descriptions of the health states of two hypothetical individuals and asked to rate which they considered healthier.20,22,Participant responses were scored on a scale ranging from 0 to 1 using a series of questions comparing the benefits of lifesaving and disease-prevention programmes for a number of health states.The pair-wise comparisons showed the relative position of health states to each other, and this additional step in the analysis was necessary to anchor those relative positions as values on a 0 to 1 scale.Disability weights were generated for all sequelae of diseases and injuries included in GBD.Further details regarding the calculation of disability weights have been published previously.20,22,Each country-specific, age-specific, sex-specific, and year-specific prevalence derived by DisMod-MR 2.1 was multiplied by a disorder-specific disability weight to estimate YLDs.For each substance use disorder, we estimated the proportion of cases that were asymptomatic using data from the US National Epidemiological Survey on Alcohol and Related Conditions for the time periods 2000–01 and 2004–05,23 and the Australian Comorbidity and Trauma Study for opioid dependence.24,25,These proportions were used to calculate a mean disability weight for each disorder across the different levels of severity in which asymptomatic cases were assigned a disability weight of 0.For all substance use disorders in GBD 2016, we removed the proportion of diagnosed individuals who reported no additional disability at the time of the survey.The remaining proportion of individuals represented so-called asymptomatic cases.11,The burden due to each cause in the GBD study was estimated separately.Since individuals might have more than one disease or injury at a specific timepoint, a simulation method was used to adjust for presence of comorbidity.The co-occurrence of different diseases and injuries was estimated by simulating populations of 40 000 individuals in each GBD location stratified by age, sex, and year.Hypothetical individuals within each population were exposed to the independent probability of having any combination of sequelae included in GBD 2016.The probability of being exposed to a sequela corresponded to its prevalence in the population.A combined disability weight to account for individuals with more than one condition was calculated by combining the disability weights, with the health loss associated 
with co-occurring sequelae combined multiplicatively (ie, the combined disability weight equals 1 minus the product of 1 minus each constituent weight), and a weighted average of each constituent disability weight was then calculated. The so-called GBD comorbidity correction was the difference between the average disability weight estimated for individuals with one sequela and the combined disability weight estimated for those with multiple sequelae. The average comorbidity correction estimated for each sequela was applied to the respective location-specific, age-specific, sex-specific, and year-specific YLDs. Although the probability of two sequelae co-occurring might be dependent, insufficient data were available to confidently estimate all dependent probabilities by age and sex in the GBD study. Thus, all probabilities of comorbidity were modelled as independent. We estimated burden by aggregating substance-use-disorder-specific epidemiological data and disability weights to calculate prevalent YLDs; multiplying substance-use-disorder-specific estimates of mortality by standard life expectancy at the age of death to calculate YLLs; summing YLDs and YLLs to generate substance-use-disorder-specific DALYs; and estimating YLDs, YLLs, and DALYs attributable to alcohol and drug use as risk factors for other health outcomes. DALYs were derived as the sum of YLDs and YLLs for each disorder, location, age group, sex, and year. Age-standardised prevalence, deaths, YLLs, YLDs, and DALYs were estimated using the GBD world population age standard. Uncertainty was derived for all estimates by simulating 1000 draws from each estimate's posterior distribution, to calculate uncertainty arising from primary inputs, sample sizes in the data collected, adjustments made to the data during modelling, and model estimation. For YLLs, uncertainty estimates reflected uncertainty due to study sample sizes, adjustments made to the all-cause mortality data, and model estimation.
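The identities used in these calculations can be summarised in a brief numerical sketch. It is an illustration under simplifying assumptions rather than the GBD pipeline: it collapses the location-, age-, sex-, and year-specific strata, replaces the 40 000-person comorbidity microsimulation with a single multiplicative combination, omits the apportioning of the combined weight across sequelae, and uses made-up numbers.

```python
# Simplified sketch of the burden identities described above; all figures are invented.
import numpy as np

def combined_disability_weight(dws):
    """Multiplicative combination for co-occurring sequelae:
    combined DW = 1 - prod(1 - DW_i), which can never exceed 1."""
    return 1.0 - np.prod(1.0 - np.asarray(dws, dtype=float))

def ylds(prevalence, disability_weight):
    """Prevalent years lived with disability."""
    return prevalence * disability_weight

def ylls(deaths, standard_life_expectancy_at_death):
    """Years of life lost: cause-specific deaths multiplied by the standard
    life expectancy at the age of death."""
    return deaths * standard_life_expectancy_at_death

# Hypothetical single stratum for one substance use disorder.
dw = combined_disability_weight([0.30, 0.10])    # disorder plus one comorbid sequela
yld = ylds(prevalence=120_000, disability_weight=dw)
yll = ylls(deaths=900, standard_life_expectancy_at_death=45.0)
dalys = yll + yld                                 # DALYs = YLLs + YLDs

# Uncertainty: with 1000 draws from each estimate's posterior distribution, the 95%
# uncertainty interval is read from the 2.5th and 97.5th percentiles of the draws.
draws = np.random.default_rng(0).normal(loc=dalys, scale=0.1 * dalys, size=1000)
ui_95 = np.percentile(draws, [2.5, 97.5])
```

The multiplicative combination is a modelling choice that keeps the comorbid disability weight bounded below 1, which a simple sum of weights would not guarantee.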
Background: Alcohol and drug use can have negative consequences for the health, economy, productivity, and social aspects of communities. We aimed to use data from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2016 to calculate global and regional estimates of the prevalence of alcohol, amphetamine, cannabis, cocaine, and opioid dependence, and to estimate global disease burden attributable to alcohol and drug use between 1990 and 2016, and for 195 countries and territories within 21 regions, and within seven super-regions. We also aimed to examine the association between disease burden and Socio-demographic Index (SDI) quintiles. Methods: We searched PubMed, EMBASE, and PsycINFO databases for original epidemiological studies on alcohol and drug use published between Jan 1, 1980, and Sept 7, 2016, without language restrictions, and used DisMod-MR 2.1, a Bayesian meta-regression tool, to estimate population-level prevalence of substance use disorders. We combined these estimates with disability weights to calculate years lived with disability (YLDs), years of life lost (YLLs), and disability-adjusted life-years (DALYs) for 1990–2016. We also used a comparative risk assessment approach to estimate burden attributable to alcohol and drug use as risk factors for other health outcomes. Findings: Globally, alcohol use disorders were the most prevalent of all substance use disorders, with 100.4 million estimated cases in 2016 (age-standardised prevalence 1320.8 cases per 100 000 people, 95% uncertainty interval [95% UI] 1181.2–1468.0). The most common drug use disorders were cannabis dependence (22.1 million cases; age-standardised prevalence 289.7 cases per 100 000 people, 95% UI 248.9–339.1) and opioid dependence (26.8 million cases; age-standardised prevalence 353.0 cases per 100 000 people, 309.9–405.9). Globally, in 2016, 99.2 million DALYs (95% UI 88.3–111.2) and 4.2% of all DALYs (3.7–4.6) were attributable to alcohol use, and 31.8 million DALYs (27.4–36.6) and 1.3% of all DALYs (1.2–1.5) were attributable to drug use as a risk factor. The burden of disease attributable to alcohol and drug use varied substantially across geographical locations, and much of this burden was due to the effect of substance use on other health outcomes. Contrasting patterns were observed for the association between total alcohol-attributable and drug-attributable burden and SDI: alcohol-attributable burden was highest in countries with a low SDI and middle–high-middle SDI, whereas the burden due to drugs increased with higher SDI level. Interpretation: Alcohol and drug use are important contributors to global disease burden. Effective interventions should be scaled up to prevent and reduce substance use disease burden. Funding: Bill & Melinda Gates Foundation and Australian National Health and Medical Research Council.
418
Poor medication adherence and risk of relapse associated with continued cannabis use in patients with first-episode psychosis: a prospective analysis
Risk of relapse after the first episode of psychosis is high,1 constituting a substantial burden for health-care systems around the world,2,3 and this relapse affects both individuals and society at large.4,In particular, relapse during the first few years after onset of the psychotic episode is an important determinant for long-term clinical and functional outcome.5,Hence, prevention of relapse is a crucial treatment target, which in turn underscores the importance of identification of modifiable risk factors that could influence relapse.Although the multifactorial nature of relapse is well known,6 two consistently identified modifiable risk factors influencing relapse are continued cannabis use following onset of psychosis7–9 and medication non-adherence,10,11 both of which are unlikely to be the result of confounding or reverse causation.12,Despite the fact that the prevalence of post-onset cannabis use13 and medication non-adherence12 in patients with psychosis is high, understanding of the effects of this remains poor.There is poor understanding about how risk factors such as cannabis use might affect outcome in psychosis.Previous studies9,14 have shown that the effect of cannabis use on risk of relapse was reduced when medication adherence was controlled for, suggesting that cannabis use could adversely affect psychosis outcome partly by influencing adherence to antipsychotic medication.This is consistent with independent evidence from a meta-analysis15 suggesting a significant effect of continued cannabis use on adherence to antipsychotic medication in patients with psychosis, which was also confirmed by the five studies9,12,16–18 that investigated this issue subsequently.However, no study to date has systematically investigated to what extent the association between cannabis use and relapse of psychosis is mediated by non-adherence with prescribed psychotropic medication.By elucidating the mechanistic pathway from cannabis use to psychosis relapse in first episode of psychosis in terms of potential mediational processes, we might be able to help identify alternative targets for intervention that could help mitigate the harm from cannabis use.Hence, in the present study, we aimed to explore whether some of the adverse effects of continued cannabis use on risk of relapse can be explained by its association with medication adherence; whether the association between continued cannabis use and risk of relapse is only partly, but not fully, mediated by medication adherence; and whether mediation effects are also present for other relapse-related outcomes, including number of relapses, length of relapse, time until relapse occurs, and intensity of care.Evidence before this study,We searched MEDLINE databases from inception to April 12, 2017, using a combination of search terms for describing diagnosis, exposure, and outcome of interest, which retrieved 2092 articles, of which 20 were selected according to the following three criteria: investigated the relationship between cannabis use and medication adherence; the majority of the sample were taking antipsychotic medication; participants were diagnosed with schizophrenia or any psychotic disorder using standardised criteria.We have previously summarised 15 of these studies as part of a meta-analysis, and these results showed that continued cannabis use increased the risk for non-adherence to antipsychotic medications.Results of the five additional studies that had been published since the original literature search were consistent with the 
results of our previous meta-analysis and confirmed that cannabis users were less likely to adhere to their prescribed medication than people who did not use cannabis.Of all relevant studies, only one investigated whether the effect of cannabis use on medication adherence mediated its effects on outcome in psychosis.They used data obtained from clinical records to report that poor medication adherence mediated the adverse effect of cannabis use on non-remission at 1 year in patients with psychosis.However, not all patients continue using cannabis following the onset of psychosis and a substantial proportion stop using the drug, a factor that was not accounted for by Colizzi and colleagues.Hence, whether non-adherence to antipsychotics truly mediates the effect of continued cannabis use following the onset of psychosis and the extent of this effect is unclear.Added value of this study,The present study extends current evidence on cannabis use being associated with increased risk of relapse in psychosis by investigating how it might be exerting this effect, focusing particularly on adherence to antipsychotic medication.The study benefits from data obtained in follow-up assessments of a large sample of patients with first-episode psychosis, which allowed a more detailed assessment of cannabis use profiles and pattern of medication adherence and the consideration of potential confounders.We explored consistency of effects by using different outcome measures of relapse, including risk or number of relapses, length of relapse, time until relapse, and severity of relapse.Implications of all the available evidence,Collectively, the results of the present study and previous evidence indicate that relapse of psychosis associated with continued cannabis use might be partly mediated through non-adherence with prescribed medication.Hence, future investigations should test whether interventions aimed at improving medication adherence could partly help mitigate the adverse effects of cannabis use on outcome in psychosis.All patients included in this prospective analysis were recruited from four different adult inpatient and outpatient units of the South London and Maudsley Mental Health National Health Service Foundation Trust in Lambeth, Southwark, Lewisham, and Croydon as part of a follow-up study aiming to investigate the role of cannabis use within the first 2 years after onset of psychosis.Patients had a clinical diagnosis of first-episode non-organic psychosis19 and were aged between 18 and 65 years when referred to local psychiatric services in south London, UK.We have previously reported on methods for assessment of patients and data acquisition.9,12,This study was granted ethical approval by South London & Maudsley NHS Foundation Trust and the Institute of Psychiatry Local Research Ethics Committee.All patients included in the study gave written informed consent.We obtained information regarding use of services, including number, duration, and legal status of inpatient admissions and referral to crisis intervention team or standard treatment by a community mental health team from electronic patient records, using the WHO Life Chart Schedule.20,Age of onset of psychosis was defined as the age on the date of referral to local psychiatric services for a first episode of psychosis.Our main outcome variable of interest was risk of relapse, which we defined as admission to a psychiatric inpatient unit owing to exacerbation of psychotic symptoms within 2 years following first presentation to 
psychiatric services. This outcome has been linked to both cannabis use and medication adherence in those with a first episode of psychosis.9,12 Other relapse-related outcome measures included the number of relapses; the length of relapse; the time to first relapse; and the care intensity at follow-up. We assessed cannabis use as a predictor variable using a modified version of the Cannabis Experience Questionnaire (CEQmv),9 obtaining data on cannabis use over the first 2 years following onset of psychosis. In line with previous work,12 cannabis users were classified into categories on the basis of their pattern of continuation of use after onset. The cannabis use variable was coded as ordered categorical. We assessed medication adherence as a mediator variable within the first 2 years after onset by use of the Life Chart Schedule.20 Similar to previous reports,12 the variable was classified on the basis of information on prescription and ratings of adherence. Other factors that have previously been reported to be associated with relapse were also included in the model as covariates, including other illicit drug use,21,22 ethnic origin,4,9 and care intensity at psychosis onset as an index of illness severity when presenting with the first episode.9,23 As done in previous studies,12 data from the CEQmv and WHO Life Chart Schedule20 were used to derive the following variables. Other drug use was defined as the use of illicit drugs other than cannabis within the first 2 years after onset. This variable was coded as a categorical variable. Care intensities at onset and follow-up were computed by rating each patient's intensity of service use at onset or follow-up, respectively. We constructed structural equation models, represented by path diagrams, to measure the mediating effect of medication adherence on the association between cannabis use and relapse. We estimated standardised direct, indirect, and total effects using R and its package lavaan.24 We estimated bias-corrected 95% CIs using 1000 bootstrap samples. The initial simple models estimated path coefficients for continued cannabis use as a predictor for medication adherence, continued cannabis use as a predictor for relapse and relapse-related outcomes, and medication adherence as a predictor for relapse and relapse-related outcomes. As part of the mediation analysis, a direct effect refers to the standardised path coefficient between continued cannabis use and risk of relapse, and an indirect effect to the product of the standardised path coefficients of path A (continued cannabis use to medication adherence) and path B (medication adherence to risk of relapse). The total effect of cannabis use on risk of relapse is the sum of the direct and indirect effects. Mediation occurred if the indirect effect was significant. Structural equations for each endogenous variable in the pathway model were adjusted for the potential confounding effects of ethnic origin, other illicit drug use, and illness severity at onset as indexed by the level of care intensity at onset. We aimed to further explore an alternative reverse mediation model to compare with the proposed mediation model. In this reverse mediation model for risk of relapse and related outcomes, continued cannabis use was treated as the mediator variable and medication adherence as the independent variable. It is suggested that the predicted mediation model would be more convincing if the reverse model identifies only non-significant indirect paths.25
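The mediation logic set out above can be sketched as follows. The study itself fitted these path models in R with the lavaan package and bias-corrected bootstrap CIs; the snippet below is a simplified, hypothetical illustration of the product-of-coefficients approach, using linear path models, a percentile bootstrap, and made-up column names (cannabis_use, adherence, relapse, with covariates assumed to be numerically coded).

```python
# Hypothetical sketch of the product-of-coefficients mediation test; not the study code.
import numpy as np
import statsmodels.api as sm

COVARS = ["ethnic_origin", "other_drug_use", "care_intensity_onset"]

def path_effects(df):
    """Path A: cannabis use -> medication adherence; path B and the direct effect
    come from regressing relapse on adherence and cannabis use (df is a pandas
    DataFrame with one row per patient)."""
    exog_a = sm.add_constant(df[["cannabis_use"] + COVARS])
    a = sm.OLS(df["adherence"], exog_a).fit().params["cannabis_use"]
    exog_b = sm.add_constant(df[["adherence", "cannabis_use"] + COVARS])
    fit_b = sm.OLS(df["relapse"], exog_b).fit()
    b, direct = fit_b.params["adherence"], fit_b.params["cannabis_use"]
    return a * b, direct                     # indirect effect, direct effect

def bootstrap_indirect_ci(df, n_boot=1000, seed=0):
    """Percentile bootstrap interval for the indirect effect; mediation is
    supported if the interval excludes zero. Total effect = direct + indirect."""
    rng = np.random.default_rng(seed)
    n = len(df)
    draws = [path_effects(df.iloc[rng.integers(0, n, n)])[0] for _ in range(n_boot)]
    return np.percentile(draws, [2.5, 97.5])
```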
The views expressed are those of the authors and not necessarily those of the NHS, the National Institute of Health Research, or the Department of Health. The funders had no role in the study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication. All authors have approved the final version of the paper. 397 patients who presented with their first episode of psychosis between April 12, 2002, and July 26, 2013, were approached for follow-up and had an assessment up until September, 2015. Of the 397 patients, 133 refused to take part in this study and 19 could not be included because of missing data. We followed up 245 patients with first-episode psychosis for 2 years from onset. 91 of the 245 patients had a relapse over the 2 years after onset of psychosis. Most patients reported regular or irregular adherence with the prescribed medication, whereas only a small subset reported non-adherence. Although most patients were classified as non-users of cannabis following the onset of psychosis, the remaining patients were classified either as intermittent or as continued cannabis users. Compared with those who did not relapse, the relapsing group was more likely to be classified as continued cannabis users and as non-adherent or irregularly adherent with the prescribed medication. With regard to the other demographic and clinical characteristics, the relapsing group was more likely to be of non-white ethnic origin and appeared to have used other illicit drugs more regularly following the onset of first-episode psychosis. The simple path models identified significant associations between the level of cannabis use continuation and worse outcome on each measure: a higher risk of relapse, a greater number of relapses, longer relapses, a higher care intensity index at follow-up, and a shorter time until a relapse occurred. The proposed mediator, medication adherence, was linked to both the cannabis use variable and the relapse outcomes, including risk of relapse, number of relapses, time until a relapse occurred, and care intensity index, but not length of relapse, suggesting that poor adherence was predictive of worse outcome in the 2 years following onset of psychosis. The structural equation modelling analyses, adjusted for ethnic origin, other illicit drug use, and intensity of care at onset, are reported in Table 3. They revealed that the association between continued cannabis use and risk of relapse was partly, but not fully, mediated by medication adherence. For risk of relapse, the model explained 25% of the variance (explained variance being the extent to which a statistical model accounts for the variation in the dependent variable), leaving 75% unexplained. The direct effect is the path between continued cannabis use and risk of relapse, and the indirect effect is the product of the path coefficients for the effect of cannabis use on medication adherence and the effect of medication adherence on risk of relapse. A similar effect was seen for care intensity index at follow-up, for which medication adherence mediated 19·7% of the effect of continued cannabis use, again indicating partial mediation based on significant indirect and direct effects. A larger proportion of the effect of continued cannabis use on the number of relapses of psychosis and
time until relapse was mediated by medication adherence.No significant mediation effect was present for length of relapse.The adjusted models explained moderate amounts of variance for outcomes defined as risk of relapse, number of relapses, length of relapse, time until relapse, and care intensity index.Testing of the alternative models with the cannabis use variable as the proposed mediator indicated that continued cannabis use did not mediate the association between medication non-adherence and the different relapse outcomes after controlling for covariates.We tested two different theoretical models, which were a mediation model that tested whether medication adherence mediated the effect of cannabis use on outcome and a reverse arrow model that tested whether cannabis use mediated the effect of medication adherence on outcome.25,Studies have used this approach to evaluate which of the proposed mediators is more valid.In those models, medication adherence had a significant direct effect on risk of relapse, number of relapses, care intensity index at follow-up, and time until a relapse occurred, indicating that cannabis use did not fully confound the effects of medication adherence on outcome.There were no indirect effects for risk of relapse, number of relapses, time until a relapse occurred, and care intensity index at follow-up, which further suggested that cannabis use did not mediate the effects of medication adherence on these relapse outcomes.For length of relapse, there were no indirect or direct effects for medication adherence.To the best of our knowledge, this is the first study that examines medication adherence as a mediator of the association between continued cannabis use following illness onset and relapse, as indexed by admission to hospital, in patients with first-episode psychosis.The adverse effects of continued cannabis use on risk of relapse were partly but not fully mediated by its association with non-adherence with prescribed antipsychotic medication.More specifically, medication non-adherence mediated the effect of continued cannabis use on risk of relapse, number of relapses, time until a relapse occurred, and care intensity index at follow-up.Medication non-adherence did not mediate the effect of continued cannabis use on length of relapse of psychosis.Our results not only indicate that those patients who continue to use cannabis following onset of their psychotic illness are also more likely to not take medications prescribed for their psychosis but also that this effect can partly explain why patients with first-episode psychosis who continue to use cannabis often suffer from a relapsing form of the illness.7,26,We8,9 and others7 have shown that cannabis use, especially continued use after onset of psychosis, is associated with relapse of psychosis resulting in admission to hospital and that this effect is more likely than not to be a causal association.12,This is consistent with other evidence implicating worse outcomes in patients with first-episode psychosis who continued to use cannabis when compared with those who stop using the substance.27,Here, we extend this previous work to show that the adverse effect of continued cannabis use on outcome in early psychosis is partly mediated by an effect on adherence with medication treatment.These findings are consistent with studies that identified an association between cannabis use and medication adherence,8,9,15,28 as well as between medication adherence and increased risk of relapse of psychosis.10,11,We 
have reported8 that failure of treatment with antipsychotic medication, as indexed by the number of unique antipsychotic prescriptions, could partly mediate the adverse effect of cannabis use on subsequent risk of relapse in first-episode psychosis. Although a change of antipsychotic medication could reflect a clinical judgment of failed treatment, several separate considerations either alone or in combination could lead to such a judgment, including treatment resistance, poor tolerability, or non-adherence to a specific antipsychotic. Until now, it has not been known which of these factors might explain how cannabis use could increase the risk of relapse. Results from the present study clearly point toward a mediating influence of poor medication adherence. Whether treatment resistance or poor tolerability also mediate some of the effects of cannabis use on relapse of psychosis is yet to be tested. Furthermore, other factors, such as depressive symptoms29 or cognitive function,30 that were not systematically investigated in this study could also have influenced the association between cannabis use and risk of relapse. Overall, our results suggest that although efforts should no doubt continue to develop more effective interventions to help patients with psychosis to reduce their cannabis use (eg, the cannabis-focused treatment programmes that are currently under assessment31), another potential approach to mitigating the harm from cannabis use might lie in ensuring better adherence of patients to their prescribed medication. It is worth noting that despite the identified mediation effect, a considerable proportion of the variance in the risk of relapse and related outcomes remains unexplained, with the models accounting for only 7% to 25% of the variance depending on the specific outcome. Future studies including much larger samples are needed to consider other risk factors of interest as well as more complex model pathways to address the issue of unexplained variance in relapse outcome. In this context, it should be pointed out that the identified associations could also be bidirectional. Moreover, as the present study was observational, temporal ambiguity between the mediator and predictor variables as well as unmeasured confounders could have biased our results. Nevertheless, to partly address this limitation of absent experimental data, we compared the proposed mediation model with an alternative path model with reversed arrows, but the results were not supportive of alternative path models that included cannabis use as a mediator of the associations between medication adherence and relapse outcome. Although other limitations of this study might relate to the retrospective assessment of cannabis use and medication adherence, and the inclusion of a selective subset of inner-city patients with first-episode psychosis who were at least 18 years old, those issues are unlikely to have affected the results. We did not consider those who started using cannabis after the onset of psychosis but had no history of premorbid regular use as a separate group, since only three participants belonged to this category. How continued cannabis use might have resulted in poor adherence to medications in patients with psychosis is unclear. Although it is possible that increased severity of psychosis,7 and consequently impaired insight or memory,32 as a result of continued cannabis use might explain poor adherence, this possibility was not investigated in the present study and warrants investigation in the
future.Our results suggest that up to a third of the adverse effect of cannabis use on outcome in first-episode psychosis could be mediated through its effect on medication adherence, suggesting that interventions aimed at improving medication adherence might partly help mitigate the adverse effects of cannabis use on outcome in psychosis.
Background Cannabis use following the onset of first-episode psychosis has been linked to both increased risk of relapse and non-adherence with antipsychotic medication. Whether poor outcome associated with cannabis use is mediated through an adverse effect of cannabis on medication adherence is unclear. Methods In a prospective analysis of data acquired from four different adult inpatient and outpatient units of the South London and Maudsley Mental Health National Health Service Foundation Trust in London, UK, 245 patients were followed up for 2 years from the onset of first-episode psychosis. Cannabis use after onset of psychosis was assessed by self-reports in face-to-face follow-up interviews. Relapse data were collected from clinical notes using the WHO Life Chart Schedule. This measure was also used to assess medication adherence on the basis of both face-to-face interviews and clinical notes. Patients were included if they had a diagnosis of first-episode non-organic or affective psychosis according to ICD-10 criteria, and were aged between 18 and 65 years when referred to local psychiatric services. We used structural equation modelling analysis to estimate whether medication adherence partly mediated the effects of continued cannabis use on risk of relapse. The primary outcome variable was relapse, defined as admission to a psychiatric inpatient unit after exacerbation of symptoms within 2 years of first presentation to psychiatric services. Information on cannabis use over the first 2 years after onset of psychosis was investigated as a predictor variable for relapse. Medication adherence was assessed as a mediator variable on the basis of clinical records and self-report data. Study researchers (TS, NP, EK, and EF) rated the adherence. Findings 397 patients who presented with their first episode of psychosis between April 12, 2002, and July 26, 2013 had a follow-up assessment until September, 2015. Of the 397 patients approached for followed up, 133 refused to take part in this study and 19 could not be included because of missing data. 91 (37%) of 245 patients with first-episode psychosis had a relapse over the 2 years of follow-up. Continued cannabis use predicted poor outcome, including risk of relapse, number of relapses, length of relapse, and care intensity at follow-up. In controlled structural equation modelling analyses, medication adherence partly mediated the effect of continued cannabis use on outcome, including risk of relapse (proportion mediated=26%, β indirect effects =0.08, 95% CI 0.004 to 0.16), number of relapses (36%, β indirect effects =0.07, 0.003 to 0.14), time until relapse (28%, β indirect effects =–0.26, −0.53 to 0.001) and care intensity (20%, β indirect effects =0.06, 0.004 to 0.11) but not length of relapse (6%, β indirect effects =0.03, −0.03 to 0.09). The adjusted models explained moderate amounts of variance for outcomes defined as risk of relapse (R 2 =0.25), number of relapses (R 2 =0.21), length of relapse (R 2 =0.07), time until relapse (R 2 =0.08), and care intensity index (R 2 =0.15). Interpretation Between 20% and 36% of the adverse effects of continued cannabis use on outcome in psychosis might be mediated through the effects of cannabis use on medication adherence. Interventions directed at medication adherence could partly help mitigate the harm from cannabis use in psychosis. Funding This study is funded by the National Institute of Health Research (NIHR) Clinician Scientist award.
419
Gray matter contamination in arterial spin labeling white matter perfusion measurements in patients with dementia
White matter perfusion measured with arterial spin labeling is a potential in vivo micro-vascular parameter to investigate the interplay between normal aging and degenerative and vascular pathology, such as small vessel disease.Data on WM perfusion are relatively scarce, because ASL has long been considered unsuitable to measure stable WM cerebral blood flow.Although recent technical advances have enabled these measurements, still a relatively long scan time is required to capture single voxel WM CBF.Due to the often limited available scan time, clinical investigators either ignore WM perfusion or use it as a reference value.Fortunately, voxel-wise comparison of WM perfusion is not always required.It may suffice to average the signal from all WM voxels to provide a single value for the hemodynamic status of the total WM region of interest.Perfusion signal from such a ROI has recently been shown to be reproducible in elderly patients with dementia.However, contamination of GM signal into WM voxels may seriously affect WM perfusion measurements, because the contrast between GM and WM CBF is large.Furthermore, changes and correlations are mainly found in GM CBF, while the WM CBF often remains relatively stable.Therefore, even a fraction of GM contamination may distort WM CBF measurements and its possible clinical correlations.Main sources of GM contamination are the point spread function of the ASL imaging readout module and partial volume voxels.Both have a large effect in ASL due to its low imaging resolution, which is required to compensate for its low signal-to-noise ratio.Currently, PV voxels are excluded based on the segmentation of a high resolution anatomical scan.However, simulations indicate that WM voxels without PV may still experience GM contamination due to the PSF.Therefore, to correctly interpret perfusion signal averaged from a WM ROI, it is essential to investigate the spatial extent of GM contamination.Can perfusion signal originating from the WM be distinguished from signal blurred from the GM?,With this knowledge a WM ROI could be constructed that experiences minimal GM contamination without excluding too many WM voxels.Constructing a WM ROI may be especially challenging in the elderly, because of the decreased T1 and ASL GM-WM contrast and WMH associated with aging.The current study investigates the spatial extent of GM contamination in elderly patients with dementia.Patient characteristics are summarized in Table 1.The mean GM CBF was 36.8 ± 8.5 mL/100 g/min.Outward GM contamination was mainly observed in the first three voxels, whereas distances − 4 to − 7 voxels showed very low signal."The inward decrease of WM signal was smaller than the outward signal decrease.In the PV analysis, the WM CBF and GM-WM ratio seemed to show decreasing GM contamination with increasing tissue probabilities.A comparison of the left and right graphs of Fig. 2a–b shows the relation of GM contamination with the inclusion of voxels containing 80 – 100% WM PV.Mean CBF and GM-WM CBF ratio of tissue probabilities 80 to 99% can be compared with distances 1 to 3 voxels.The WM CBF and GM-WM CBF ratio at 100% WM tissue probability can be compared with distance 4 voxels.At higher inward distances the mean CBF decreased further and reached lower values than with the exclusion of all PV voxels.Similarly, at these higher distances the GM-WM CBF ratio reached higher values than with the exclusion of all voxels containing < 100% WM PV.Fig. 
3 shows the difference between a WM mask without these voxels and a WM mask with these voxels but with three erosions applied.It illustrates that the exclusion of voxels containing < 100% WM PV did not remove all subcortical WM voxels whereas it did remove voxels within the deep WM.The findings of this study are threefold.Firstly, the outward GM contamination suggests that GM contamination mainly affects the first three subcortical WM voxels and has only minor influence on deep WM signal, beyond three voxels distance from the GM.Secondly, the significant asymmetry between the inward and outward signal indicates that the detected signal within the WM voxels reflects WM perfusion signal.Finally, Fig. 3 indicates that GM contamination is not restricted to voxels that contain more than 0% GM PV.These results provide insight in the distinction of PSF from the effect of PV voxels, and show that, within a WM ROI, WM signal can be separated from the contamination of GM signal.Using probabilistic tissue segmentation, generally two different methods can be applied to avoid GM contamination.The tissue probability threshold can be set high to exclude all voxels containing less than 100% WM PV.Alternatively, it can be set relatively low in combination with a number of erosions applied on the outside of the mask.Here, we have compared the two methods.With an increase in excluded voxels that contain GM or CSF PV, we observed decreasing GM contamination, a trend that is in agreement with previous findings.As the WM CBF and GM-WM CBF ratio at 100% tissue probability were comparable to CBF and GM-WM CBF ratio at a distance of 4 voxels, it appears that it would suffice to exclude all voxels containing less than 100% WM PV.However, Fig. 3a shows that 100% WM voxels also resided within the subcortical WM, where GM contamination was observed.In addition, the segmentation algorithm removed voxels within the deep WM, where no GM contamination was observed.The removal of deep WM voxels is probably the result of segmentation errors due to WM hyperintensities or CSF PV voxels.Although CSF contamination decreases the measured WM CBF, this effect includes only noise and does not bias clinical correlations — as is the case for GM contamination.Therefore, we conclude that the application of erosion on the outer boundary of a WM mask is a more effective way to avoid GM contamination compared to the exclusion of voxels containing less than 100% WM PV.The GM-WM ratio has been frequently used to compare perfusion results independent from global quantification differences.Nevertheless, discrepancies exist between literature values of this ratio, even within modalities.Where some authors have reported ratios between 2 and 3, others reported ratios between 4 and 6.Whereas studies with the highest values were focused on deep WM or used methods that were less sensitive to GM contamination, studies with lower values seem to have employed a larger ROI or lower imaging resolution.Our ratios in the deep WM are within the range of the first whereas our ratios in subcortical WM are more comparable to the latter.In addition, the ratios in subcortical WM are comparable to those obtained in the PV analysis.This adds to the point that the exclusion of voxels containing less than 100% WM PV may not suffice to avoid GM contamination.Our ratios in deep WM, on the other hand, are still slightly lower than previously reported values.This may be attributable to aging or WMH.Alternatively, these ratios may depend on quantification differences 
between GM and WM CBF, such as the T1 relaxation time of tissue, blood–brain partition coefficient or tissue arrival times.In the current study, we aimed to visualize the distance analysis in CBF units without influencing our results by differences in CBF quantification.Therefore, an identical model was applied for the quantification of GM and WM CBF and the label was assumed to remain in the vascular compartment.This assumption may especially be valid in the elderly, because of their prolonged transit times.Moreover, such a simple model eliminates PV effects introduced by quantification based on T1 segmentation, due to the possibility of registration mismatches.Alternatively, tissue probability maps can be acquired using the same ASL readout module, which enables separate GM- and WM-quantification that is not affected by registration mismatches.In the current study, these mismatches may be increased by echo-planar imaging distortions in regions that are close to air-tissue transitions, which are predominantly GM areas.This highlights the importance of proper registration between the T1 and the ASL scan.It should be acknowledged that the design of the current analysis is based on segmentations of an anatomical 3D T1 scan, and assumes homogeneous perfusion values across all voxels with the same distance from the GM-WM boundary.This assumption is required to average multiple voxels for sufficient SNR.Whether or not perfusion is homogeneous across WM is currently unknown.On the other hand, it is well known that transit times differ within the WM.This heterogeneity has probably contributed to the continuing CBF decrease from distances 4 to 7 voxels, where no GM contamination is expected.Alternatively, this may be caused by CBF decreasing lesions, such as WM hyperintensities, or CSF contamination.Outside the brain, the measured signal may not entirely be dependent upon the PSF.Factors that may have contributed to the signal found outside the GM include extra-cranial vessels, perfusion of the skin and motion artifacts.The heterogeneity of acquisition details that determine the PSF, such as the ASL readout resolution, readout time or T2* blurring, may limit the extrapolation of the present results to other studies.One previous study simulated the effect of PSF in a single large central WM voxel on multiple spatial resolutions, assuming a GM and WM CBF of 80 and 0 mL/100 g/min, respectively.Whereas on a low isotropic resolution such as 12.5 mm a contamination of 10 mL/100 g/min could be measured in the central WM, on an isotropic resolution of 3.1 mm only 0.08 mL/100 g/min GM contamination was left.This simulation is in line with the present results, which demonstrate that perfusion measured in deep WM contains only minor GM contamination.Furthermore, the PSF differs between 2D and 3D readouts.The current distance analysis was restricted to a single slice to compare the 2D in-plane PSF versus the effect of PV voxels.This is a valid comparison for 2D readout modules, since they have no PSF in the through-plane direction — except for crosstalk from slice profile, which is negligible in slices as thick as 7 mm.Although 3D readouts exhibit increased SNR and improved background suppression allowing for higher spatial resolution, they experience increased GM contamination due to their wider 3D PSF — especially in the through-plane direction.Even though methods exist that numerically correct this GM-WM contamination, a 2D readout module can be preferred when uncontaminated WM CBF measurements are more 
important than spatial or temporal SNR. To summarize, these data illustrate that, using pseudo-continuous ASL, WM perfusion signal can be distinguished from GM contamination within clinically feasible scan time in patients with cognitive impairment. Because of the PSF, GM contamination is not restricted to PV voxels and it seems necessary to apply erosion to remove subcortical WM voxels. It is expected that this method would only work in some slices, as for the majority of slices too few or no WM voxels will be left after 3 erosions. Whether this is sufficient for clinical studies should be clarified in further research. These results should be taken into account when exploring the use of WM perfusion as a micro-vascular biomarker. 41 patients presenting to an outpatient memory clinic were included in this study. Main inclusion criteria were age higher than 18 years and a score on the mini-mental state examination equal to or higher than 20. Main exclusion criteria were a history of transient ischemic attack or stroke in the last two years or with cognitive decline within three months after the event, major depressive disorder, psychosis or schizophrenia, alcohol abuse, brain tumor, and epilepsy. All patients provided written informed consent and the study was approved by the VU University Medical Center and Academic Medical Center ethical review boards. Of the 41 enrolled participants, 18 fulfilled criteria for mild cognitive impairment and 23 fulfilled criteria for probable Alzheimer's Disease or mixed dementia. All imaging was performed on a 3.0 T Intera with a SENSE-8-channel head coil and body coil transmission. To restrict motion, the subjects' head was stabilized with foamed material inside the head coil. An isotropic 1 mm 3D T1-weighted scan and a 2D FLAIR scan with 3 mm slice thickness were collected using a routine clinical protocol. Added to this protocol was a gradient echo single shot echo-planar imaging pseudo-continuous ASL sequence with the following imaging parameters: resolution, 3 × 3 × 7 mm3; FOV, 240 × 240 mm2; 17 continuous axial slices; TE/TR, 14/4000 ms; flip angle, 90°; SENSE, 2.5; labeling duration, 1650 ms; post-labeling delay, 1525 ms. Slices were acquired in sequential ascending order. 30 label and control pairs were acquired, resulting in a total scan time of 4 min. Background suppression was implemented with two inversion pulses 1680 and 2830 ms after a pre-labeling saturation pulse. The labeling plane was positioned parallel and 9 cm caudal to the center of the imaging volume. To describe the presence of small vessel disease, the Fazekas WM hyperintensity severity scale and the four-point global cortical atrophy score were assessed by a trained rater, blinded to the clinical information. Two distance maps were constructed to compare the extent of inward and outward GM contamination. This method enables the comparison of perfusion signal measured in the WM with signal measured outside the brain. Outside the brain, where air or tissue types such as cerebrospinal fluid, meninges, bone and skin are located, no perfusion signal is expected except from outward GM contamination. This analysis was carried out in 2D and restricted to a single transversal slice located 2 slices superior to the basal ganglia. This slice contains a relatively large area of WM, has no central GM and does not experience much distortion or signal dropout, as is frequently observed anteriorly in echo-planar imaging. The procedures of the distance analysis are stepwise listed here and visualized in Fig. 1.
The WM probability map was converted into a WM mask, including tissue probabilities > 10%. This low probability threshold avoids the exclusion of WM hyperintensity voxels, which are frequently misclassified as GM voxels. Subsequently, the GM probability map was converted into a GM mask, including tissue probabilities > 90%, which is complementary to the WM mask at the GM/WM boundary. Any remaining regions inside the WM or GM masks were masked as well, such that erosions or dilations affected the outer borders of the masks only. Erosions were applied to the WM mask and dilations to the GM mask, using a cross structural element with radius 1. Inward and outward city-block geodesic distance maps were created by labeling each voxel with the number of erosions required to remove it from the WM mask or with the number of dilations required to add it to the GM mask. Consequently, the resulting distance maps show for each WM voxel its shortest distance to the outer border of the WM mask and for each voxel outside the brain its shortest distance to the outer border of the GM mask. Since the in-plane voxel size is 3 × 3 mm, a distance of 1 voxel represents a distance of 3 mm. All voxels with the same distance were projected on the CBF maps to compute the mean CBF and voxel count for each distance. To investigate the influence of PV voxels, the same WM tissue probability map as used for the distance analysis was converted into multiple binary masks with WM tissue probabilities ranging from 80% to 100% with a bin size of 1%. This range was selected as it encloses probability thresholds that have previously been selected in WM research. These WM masks were projected on the ASL data and their mean WM CBF, GM-WM CBF ratio and voxel count were calculated. For both the distance and PV analyses, each individual's mean CBF within the GM mask was used as the GM CBF. This GM CBF was also used to calculate the GM-WM ratio for the inward distances 1 to 7 voxels.
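For readers who wish to reproduce the mask operations, the following is a minimal 2D sketch of the distance-map construction using standard Python image-processing routines (NumPy and scipy.ndimage). The array names, the hole-filling step, and the maximum distance of 7 voxels are assumptions for illustration; this is not the implementation used in the study.

```python
import numpy as np
from scipy import ndimage

cross = ndimage.generate_binary_structure(2, 1)  # cross-shaped (city-block) element, radius 1

def geodesic_distance_maps(wm_prob, gm_prob, max_dist=7):
    """Label each WM voxel with the number of erosions needed to remove it
    (inward distance) and each extracerebral voxel with the number of GM-mask
    dilations needed to reach it (outward distance)."""
    wm_mask = ndimage.binary_fill_holes(wm_prob > 0.10)  # >10% keeps WM-hyperintensity voxels
    gm_mask = ndimage.binary_fill_holes(gm_prob > 0.90)

    inward = np.zeros(wm_mask.shape, dtype=int)
    eroded = wm_mask.copy()
    for d in range(1, max_dist + 1):
        smaller = ndimage.binary_erosion(eroded, structure=cross)
        inward[eroded & ~smaller] = d        # removed at the d-th erosion
        eroded = smaller                      # deeper voxels than max_dist keep 0

    outward = np.zeros(gm_mask.shape, dtype=int)
    grown = gm_mask.copy()
    for d in range(1, max_dist + 1):
        larger = ndimage.binary_dilation(grown, structure=cross)
        outward[larger & ~grown] = d         # added at the d-th dilation
        grown = larger
    outward[wm_mask | gm_mask] = 0           # outward distances apply outside the brain only

    return inward, outward

def mean_cbf_per_distance(cbf, distance_map, max_dist=7):
    """Mean CBF and voxel count for each distance (1 voxel = 3 mm in plane)."""
    stats = {}
    for d in range(1, max_dist + 1):
        voxels = distance_map == d
        stats[d] = (float(cbf[voxels].mean()) if voxels.any() else float("nan"),
                    int(voxels.sum()))
    return stats
```

The PV analysis can be reproduced in the same way by replacing the distance map with binary masks thresholded at WM tissue probabilities between 80% and 100%.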
Introduction White matter (WM) perfusion measurements with arterial spin labeling can be severely contaminated by gray matter (GM) perfusion signal, especially in the elderly. The current study investigates the spatial extent of GM contamination by comparing perfusion signal measured in the WM with signal measured outside the brain. Material and methods Four minute 3T pseudo-continuous arterial spin labeling scans were performed in 41 elderly subjects with cognitive impairment. Outward and inward geodesic distance maps were created, based on dilations and erosions of GM and WM masks. For all outward and inward geodesic distances, the mean CBF was calculated and compared. Results GM contamination was mainly found in the first 3 subcortical WM voxels and had only minor influence on the deep WM signal (distances 4 to 7 voxels). Perfusion signal in the WM was significantly higher than perfusion signal outside the brain, indicating the presence of WM signal. Conclusion These findings indicate that WM perfusion signal can be measured unaffected by GM contamination in elderly patients with cognitive impairment. GM contamination can be avoided by the erosion of WM masks, removing subcortical WM voxels from the analysis. These results should be taken into account when exploring the use of WM perfusion as micro-vascular biomarker. © 2013 The Authors.
420
CZTSxSe1−x nanocrystals: Composition dependent method of preparation, morphological characterization and cyclic voltammetry data analysis
The data given in this data article are in the form of seven figures and one table. They describe in detail the synthesis, characterization and procedure followed for the cyclic voltammetry investigation of CZTSxSe1−x nanocrystals. TEM data are presented to highlight the morphology and size distribution. The topography has been recorded by FESEM and AFM image analysis. The EDAX spectra give details about composition and stoichiometry. Uniform distributions of the constituent elements are seen in the elemental mapping images. The adsorption of the capping agent, oleylamine (OLA), has been confirmed by FTIR data. The procedure for the preparation of the working electrode for the electrochemical investigation is described in detail. The synthesis of the CZTSxSe1−x nanocrystals was carried out using the hot injection method suggested by Agrawal and Riha et al. A typical synthesis setup used in the present investigation is shown in Fig. 1; the sequence of addition is marked with red arrows. Weighed amounts of all the constituents are first transferred into a three-necked flask mounted on a heating mantle with a temperature controller. Upon heating to 130 °C with constant stirring, all the metal complexes dissolve and form a brown transparent solution. The temperature is raised further to 225 °C and freshly prepared S/Se in OLA solution is added, which leads to a black colored product. The electrochemical measurements on the samples were done on a CZTSxSe1−x coated gold-disk working electrode. The sample was applied on the electrode by drop-casting: the electrode was mounted vertically in a desiccator and a 75 μL dispersion of nanocrystals in dichloromethane was applied carefully onto the electrode surface. The drop was allowed to dry with the help of a mild vacuum for 15 min, which helps the solvent to evaporate quickly and form a film on the electrode surface. Adhesion of the film was found to be good and it did not show any tendency to fall off or foul in the solvent during the measurements. From the weight of the loading, the area of the electrode and the density of the samples, the thickness of the films was estimated and found to be in the range 58 μm. Such modified electrodes were used as prepared without any post annealing. Fig. 2 shows the FTIR spectra recorded on CZTS nanocrystal powder samples and their comparison with a neat oleylamine sample. The bands at 3372 cm−1 and 3292 cm−1 in the OLA sample are attributed to the asymmetric and symmetric stretching of the primary NH2 group. These bands are replaced by a new band at 3170 cm−1 in the CZTS samples, which matches the stretching vibration of a secondary amine group. This observation is explained on the basis of a change in bond order during the adsorption. Similarly, the intensity of the bending vibration at 795 cm−1 for the NH2 in OLA is decreased and shifted to 802 cm−1 in the CZTS sample. The intensities of the CH3 and the corresponding C–C bending vibrations at 1465 cm−1 and 722 cm−1 are decreased in the case of OLA adsorbed on CZTS. In both cases, neat OLA and OLA on CZTS show the C–H symmetric and asymmetric stretching vibrations of the methylene and methyl groups at 2923 cm−1 and 2852 cm−1. The bending vibration due to C–N at 1071 cm−1 is broadened in the case of OLA with CZTS. In the CZTS sample a strong band at 605 cm−1 is observed, which corresponds to a metal–nitrogen stretching mode. Overall, the FTIR data analysis confirms the coordination of OLA with the CZTS nanocrystals. Fig. 2 shows similar bands for all the other compositions.
Fig. 3 shows the TEM micrographs of the CZTSxSe1−x alloy nanocrystals for varied composition, with low resolution and high resolution images for x=0.17, x=0.42 and x=0.74. The TEM images for x=1 and x=0 are shown in . For all compositions a faceted morphology with polyhedron shapes and sizes ranging from 10 to 30 nm is observed. Fig. 4 shows the AFM images recorded on CZTSxSe1−x nanocrystals for x=0–1 (x=1, 0.74, 0.42, 0.17 and 0). In the AFM images, faceted and nearly spherical nanocrystals are observed. The average height for all the CZTSxSe1−x nanocrystals is in the range of 20 nm, suggesting a uniform topology. The size distributions are in the range of 20–30 nm. Fig. 5 shows the FESEM micrographs recorded on CZTSxSe1−x nanocrystals for x=0–1 (x=1, 0.74, 0.42, 0.17 and 0). Part of the FESEM images show mono-disperse nanocrystals spread over the silicon substrate, while the remaining images show the poly-disperse nature of the nanocrystals. From the FESEM images, the size distributions were in the range of 10–30 nm, which matches very well with the sizes estimated from TEM and AFM. Fig. 6 shows the EDAX spectra recorded on CZTSxSe1−x nanocrystals for x=0–1 (x=1, 0.74, 0.42, 0.17 and 0). From all the EDAX spectra, the compositions were estimated. They show good agreement between the precursor ratios used in the synthesis and the compositions estimated from EDAX. The distribution of elements in all the samples was measured by EDAX-based elemental mapping, which is presented in Fig. 7. The inset of Fig. 7 shows the bar chart of the stoichiometry as element vs. atomic %. The mapping data underline the uniform stoichiometric distribution in the samples. Table 1 summarizes the data for the optical band gaps from UV–Visible spectroscopy and the electrochemical band gaps from cyclic voltammetry. From the voltammetric measurements, the band edge parameters, viz. the valence band and conduction band edges, were estimated vs. NHE and the local vacuum level, respectively. Along with the band edge parameters, the crystal structure parameters such as the lattice constants and d-spacings calculated from the X-ray diffraction patterns are also presented in Table 1.
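As an aside on how the film thickness and the electrochemical band-edge values summarized above can be obtained, the short Python sketch below combines the drop-cast loading with a commonly used conversion of cyclic voltammetry onset potentials into band-edge energies. All numerical inputs and the 4.5 eV offset between NHE and the local vacuum level are placeholders for illustration; they are not the measured values of this work.

```python
# All numbers below are placeholders for illustration, not measured values.

def film_thickness_um(loading_mg, density_g_cm3, area_cm2):
    """Film thickness in micrometres from the drop-cast loading, the sample
    density and the electrode area, assuming a fully dense, uniform film."""
    volume_cm3 = (loading_mg / 1000.0) / density_g_cm3
    return volume_cm3 / area_cm2 * 1e4            # cm converted to micrometres

def band_edges_vs_vacuum(e_ox_onset_nhe, e_red_onset_nhe, ref_ev=4.5):
    """Approximate band-edge energies (eV vs local vacuum) and electrochemical
    band gap from CV onset potentials (V vs NHE); the 4.5 eV NHE-to-vacuum
    offset is a commonly used approximation."""
    e_vb = -(e_ox_onset_nhe + ref_ev)             # valence band edge
    e_cb = -(e_red_onset_nhe + ref_ev)            # conduction band edge
    return e_vb, e_cb, e_cb - e_vb                # electrochemical gap in eV

# Hypothetical example inputs:
print(film_thickness_um(loading_mg=0.5, density_g_cm3=4.6, area_cm2=0.07))
print(band_edges_vs_vacuum(e_ox_onset_nhe=0.9, e_red_onset_nhe=-0.6))
```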
In this article, the synthesis procedure for the preparation of copper zinc tin sulpho-selenide (CZTSxSe1−x) alloy nanocrystals and the data acquired for the material characterization are presented. This data article is related to the research article doi: http://dx.doi.org/10.1016/j.solmat.2016.06.030 (Jadhav et al., 2016) [1]. FTIR data are presented, which helped to confirm the adsorption of oleylamine on CZTSxSe1−x. Transmission electron microscopy (TEM), field emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM) data are presented, which have been used to reveal the morphological details of the nanocrystals. Energy dispersive X-ray analysis (EDAX) based elemental mapping data are presented to confirm the elemental composition of the nanocrystals. The procedure for the preparation of the CZTSxSe1−x based working electrode for the CV measurements is given. A summary table of the optical and electrochemical band gaps and the valence and conduction band edges as a function of composition is provided for ready reference.
421
Eco-efficiency assessment of manufacturing carbon fiber reinforced polymers (CFRP) in aerospace industry
In both the ecological and economic aspects of sustainability, there is significant potential for developing the eco-efficiency of the aerospace manufacturing process. An eco-efficiency benefit is crucial for enhancing the further implementation of carbon fiber reinforced polymers in modern commercial aircraft. However, this promising implementation of CFRP is confronted by a lack of associated studies that discuss the eco-efficiency of their manufacturing process. The increasing demand for structures made of CFRP in the aerospace industry is driving the development of more eco-efficient manufacturing. Eco-efficiency enhancement involves both ecological and economic aspects. In practice, eco-efficiency represents a major development concern in the aerospace industry. On the one hand, global warming and the phenomenon of climate change have been associated with carbon dioxide as the primarily emitted greenhouse gas. In the aerospace industry, structures made of CFRP can lead to a significant reduction in aircraft empty weight. This weight reduction can decrease CO2 emissions by up to 20% during operations. On the other hand, the economic aspect is crucial in shaping the future of CFRP implementation in the aerospace industry, as cost reduction is a main market driver. In this work, the eco-efficiency of a case study of manufacturing a wing rib made of CFRP is assessed. According to an internal investigation within the LOCOMACHS project, this rib offers up to 50% weight reduction compared to the conventional aluminum rib. Considering CFRPs, there are several studies in which eco-efficiency is discussed for the different life cycle stages of these materials. A selection of associated studies is briefly reviewed in this paper. The review illuminates the areas of intersection between this work and the reviewed studies. It also discusses the differences between these studies and this one in terms of the industries and manufacturing techniques considered. For the automotive industry, many studies about the eco-efficiency of CFRP have been published. For instance, the study by Dhingra et al. compared the ecological impacts of several materials, including CFRP, for a "cradle-to-grave" vehicle life cycle. However, neither manufacturing techniques nor the unit processes within them are illustrated in that work. Considering the same industry, Kasai's paper also provides a comparison between several materials, such as steel, aluminum and fiber reinforced polymer. Kasai's results show the benefit of implementing FRP; however, as a consequential LCA, the exact impact value is undetermined. Moreover, Kasai's work covers only the ecological impact. Considering the economic aspect, Das's study describes the cost drivers of the CFRP manufacturing process precisely. However, his work discusses only the economic impact of liquid compression molding in the automotive industry. The eco-efficiency of CFRP manufacturing in the aerospace industry has been studied as well. In their work, Shehab et al. assessed the cost of aircraft CFRP structures. Their paper covers different cost categories for a selection of unit processes including manual layup, vacuum bagging, in-autoclave curing, and quality assurance. Hence, their work discusses a very similar case study. Even so, the results of Shehab et al. are not directly comparable to the results of this work, because the structure geometries are different and no cost values are provided in their work. For ML and assembly, Choi et al.
work studies the issue of design-to-cost based on existing weight and cost estimation tools.Nonetheless, Choi et al. provide no activity-based assessment for the manufacturing process but rather an estimation model for DTC and weight-to-cost.Moreover, structure specifications in their study differ from the wing rib studied in this work .Therefore, the direct comparison between Choi et al. and this work results is insufficient.However, input data such as material costs and work durations can be considered.Moreover, Haffner thesis provides an activity-based technical cost assessment of selected manufacturing techniques for various aerospace structures."Nonetheless, his thesis doesn't study the techniques of in-autoclave liquid resin infusion such as single-line-injection .Considering cost estimation based on complexity, the paper of Gutowski et al. provides cost estimation for a set of manufacturing unit processes.However, unlike our work the activity-based estimation in Gutowski et al. study is based only partially on data collection.Moreover, their study estimates mainly the time in a bottom-up approach, whereas no ecological estimation is considered .For modern aircrafts, a similar approach with highly detailed complexity consideration has been adopted by Hagnell et al.In their work, Hagnell et al. discuss the global production cost of the wing box to which the rib in our work belongs.However, in their work neither the ecological impact nor LRI technique is included .A study that assesses manufacturing eco-efficiency has been performed by Witik et al.Their study covers both eco-efficiency aspects for CFRP manufacturing using in-autoclave curing or oven tempering for LCM as well as prepreg.In their work, the manufacturing processes of a simple panel in different techniques are compared.Similar to this paper, their assessment illustrates the cost distribution over the following cost and carbon footprint drivers including materials, labor, equipment, ancillaries and energy .However, several input parameters vary between Witik et al. study and this study.Although CFRPs can be implemented in many industries the key behind their eco-efficiency impacts is affiliated with the holistic manufacturing process and not only the material itself.Therefore, it is concluded that eco-efficiency of aircraft wing rib manufacturing is only comparable with CFRP structures from other industries if the manufacturing processes are identical.Hence, the identification of these manufacturing processes, their input parameters, and their system boundaries is crucial for the assessment.This can be also clearly concluded from the significant cost differences of similar CFRP structures from different industries.For evaluation, the results of Hagnell et al., Das, Haffner, Gutowski et al., and Witik et al. 
are compared with the results of this paper.In order to enhance the eco-efficiency, it is essential to investigate, develop, and implement suitable decision support tools that assess the ecological and economic performance of the studied process.Generally, there are several decision support tools that can be applied.LCA is adopted in this study due to its systematic framework.Furthermore, LCCA is integrated within the framework of LCA in order to have a comprehensive eco-efficiency decision support tool .In order to have an adequate description of manufacturing process, a modeling method is required.Therefore, LCA and LCCA are performed within a representative process model that is developed by the application of business process reengineering.Thus, within this work an integrated framework of LCA and BPR is established.LCA is a support tool that provides decision-makers with ecological development guidelines.LCA aims to identify the associated ecological impact by a set of environmental performance indicators.This ecological impact can be assessed for a product as a functional unit or a process as a product system.The impact results should be gathered for defined ecological impact categories such as the climate change.Both LCA and LCCA are key tools in promoting the eco-efficiency of a product system .Based on LCA, LCCA analyzes the cost of a product system.It evaluates the economic performance within the product life cycle by a set of economic indicators.Performing LCCA guides the decision-makers to select the most cost effective alternatives and identify the required process modification .Despite the fact that LCCA is based on LCA, they are considered as diverse decision support tools, due to their various goals and perspectives.These tools provide the support to solve completely different problems .Thus, differences and similarities between these tools can be analyzed in a systematic comparison that is based on their common framework phases, as it is demonstrated in Table 1.As it is shown in Table 1, LCA is performed through an iterative framework that consists of discrete phases.The first phase in this framework includes defining the goal and scope of the assessment as well as its system boundary.The second phase is the life cycle inventory analysis in which the associated data from the assessed process are collected.Life cycle impact assessment is the third phase in this framework.LCIA is resulted from the assessment.The assessment results guide the decision-makers to the proper direct applications.However, the direct applications themselves are beyond the scope of the LCA.In the final interpretation phase, all previous phases are evaluated and the required modifications in each one are performed .Table 1 explains the different goals and scopes of LCA and LCCA.It also illuminates the miscellaneous results which are compiled from the various indicators of both sides.Elementary and intermediate flows are the measurable parameters within the data collection in LCI.On the one hand, elementary flows are defined as the relevant inputs entering or outputs leaving the entire studied product system.Elementary flow can be either energy or material, while in this study we consider labor work as a form of energy.On the other hand, intermediate flows include any product, material, or energy that flows between the unit processes within the same system .Considering the cost assessment, there are several other models which might be implemented in undertaking LCCA, such as material flow cost 
accounting and activity-based costing .These models have a common bottom-up approach.Technically, LCA guides the decision-makers to select the suitable direct applications and comparing different scenarios.It can be used to provide comparable non-absolute values within what is called consequential LCA as well .On the other hand, cost models are mainly implemented to determine exact values within what can be considered as attributional LCCA .Generally, in LCA a product life cycle includes all product stages from raw material to final disposal.This physical life cycle, which is also known as cradle-to-grave, can be split into several gate-to-gate stages .Even though, the definition of life cycle stages differs from economic and ecological perspectives.Considering the durability and sustainability, a holistic eco-efficiency life cycle has been established.This eco-efficiency life cycle is illustrated in Fig. 1.In Fig. 1, activities of both sustainability and durability are associated with the defined stages.Hence, this paper demonstrates the assessment of structure manufacturing from refined materials as a gate-to-gate simplified LCA, as it is unshaded within Fig. 1.In this work, the ecological impact category of climate change is assessed by determining the carbon footprint .Beside carbon footprint, manufacturing direct cost is assessed for the economic aspect.Generally, bottom-up models are implemented in realizing a gate-to-gate assessment of economic or combined eco-efficiency impacts .For such activity-based assessment, sufficient process modeling method and framework are crucial.In practice, the correlation between LCA as well as LCCA on the one side and the process modeling on the other side already exists.However, it has been concluded that a clear framework for process modeling that covers both visualization and parametrization is decisive for the eco-efficiency assessment .For activity-based eco-efficiency assessment, a modeling framework is required.This framework should enable the development of a computer-based model.In its framework, LCA contains only general guidelines for the calculation procedures and system boundary modeling .LCA framework needs to be integrated with a suitable modeling framework such as the BPR.According to Champy and Cohen, BPR is defined as a fundamental redesign and rethinking of a business process.It aims to achieve the required development by measuring the performance including cost, quality, and time .Practically, process modeling is a core dimension of LCA .Furthermore, implementing BPR facilitates the combination of both eco-efficiency aspects in one comprehensive process model.It also enables the process modification and redesign to evaluate the direct applications.In this paper, the realization of eco-efficiency model for the manufacturing process is carried out through the BPR within LCA framework.Therefore, an integrated framework that includes LCA, BPR, and manufacturing decision support has been developed, as it is shown in Fig. 2.This framework aims to facilitate handling the eco-efficiency assessment models particularly in manufacturing.In its framework, LCA includes an iterative interpretation and evaluation phase.This integrated framework consists of validation stages as they are shown with dashed lines in Fig. 
2.The generated computerized model is validated through the qualification of conceptual model to reality as well as verification between both conceptual and computerized models .Furthermore, the generated decision can be validated with reality as well .Similar to the conventional frameworks, this decision support framework is applied as a continuous iterative development loop.In practice, decision support tools can be realized in the form of software based on LCA framework .Ganzheitliche Bilanz for instance, which is a German term that means holistic balance, is decision support software that is developed by the University of Stuttgart in Germany.This tool provides ecological assessment for the entire life cycle of a product.Another example is Umberto software that assesses both ecological and economic impacts.Within Umberto, manufacturing processes can be modeled and the associated elementary flows can be allocated .System for integrated environmental assessment of products is another worldwide known LCA software that covers the entire life cycle .These assessment software packages depend on universal ecological databases.These databases are continuously updated based on the results of associated assessments .These tools are able to cover the entire life cycle as well as a wide range of ecological impact categories such as climate change, human health, resources, and ecosystem quality .This paper presents the results generated by the eco-efficiency assessment model.Based on the integrated framework, EEAM is developed by the German Aerospace Center.EEAM is a bottom-up and activity-based carbon footprint and direct cost assessment model.It assesses only the manufacturing process as simplified gate-to-gate LCA .On the one side, EEAM is similar to other existing tools as an eco-efficiency decision support model that covers both ecological and economic impacts.This model is designed to handle the manufacturing of FRP in specific.EEAM has the advantage of offering detailed assessment.It is also adaptive for various manufacturing techniques of numerous FRP structures.By EEAM, the eco-efficiency of wing rib manufacturing is assessed in this work.After defining the wing rib as a functional unit and the manufacturing unit processes within a clear system boundary, the included elementary and intermediate flows are determined.Then, the manufacturing process is modeled and visualized.LCI is performed to collect the data for the parametrization.As a part of LCIA, EEAM results facilitates the detection of the manufacturing bottlenecks.This assists the decision-makers in identifying the proper development as direct applications.In aerospace industry, load transmitting structures such as wing ribs are considered as complex composite structures .In this paper, an aircraft wing rib made of CFRP is studied.As it is shown in Fig. 
3, this rib has the configurations of about 1.35 m length, 0.29 m height, 0.05 m depth, 3.2 kg mass, and 0.008 m skin thickness.The CFRP rib is manufactured by the technique of in-autoclave SLI and a ML preforming process.Within the LOCOMACHS project, the application possibilities of such ribs are studied for a modern commercial aircraft .Technical system boundary is summarized by defining the assessed manufacturing technique.From the discrete open mold techniques of LRI, SLI is used in manufacturing the aircraft wing rib in this case study.In this process, a microwave autoclave with about 8.04 m3 capacity, computer-numerical control cutter, and a double ribs mold are implemented.Furthermore, other equipment such as air blowers, matrix vessel, and single vacuum pump with 8.5 m3/h performance are utilized.The materials implemented include non-crimp fabrics, thermoset epoxy resin as matrix, and other ancillary materials such as tacky-tape, vacuum bags, adhesive tapes, acetone, release agents, and different types of gloves.In this paper energy is defined to be the energy mix that is used in electricity form.In practice, wing ribs have been manufactured in DLR laboratories within very low production volume.However, the equipment utilization is calculated for industrial scale with consideration of affiliated maintenance costs.Mold cost distribution has been adjusted to match industrial series production as well.Beside the technical boundary, the definition of system boundary describes the geographical and temporal boundaries of the studied process .After defining the technical, geographical, and temporal boundaries, the manufacturing process is visualized.Within the simplified LCA, manufacturing process of CFRP is defined as a product system that consists of several unit processes.These unit processes have quantifiable input parameters as elementary and intermediate flows .Process outputs on the other hand represent the assembly-ready CFRP structure and process eco-efficiency impact.Ecological and economic inputs as well as outputs are generically categorized and described, as it is shown in Fig. 4.Ecological elementary and intermediate flows include either materials or energy .However, in this case study, these flows are considered for both eco-efficiency aspects.Hence, the material flow is split into three categories that consist of fiber, matrix as well as ancillaries.The ancillaries have been defined to be materials that are utilized to perform the process without being an element of the final structure .To cover LCCA, labor and equipment inputs have been taken into account as well.Beside the CFRP structure, the product system outputs comprehend the carbon footprint, direct cost as well as material waste.Material waste consists of the wasted fiber and matrix materials.Generally, the manufacturing cost composes of direct and indirect costs .In this study, only the direct manufacturing cost is assessed.Direct cost includes fiber and matrix materials, labor work, equipment operation, ancillaries, and energy.However, due to the minor impact it shows in previous studies, facility rent cost is neglected in this work.Although the total carbon footprint and direct cost include the waste impact, waste is displayed separately for the decision-makers.In this case study, the in-autoclave SLI and ML manufacturing technique of CFRP is split into a chain of discrete unit processes, as Fig. 
5 shows.This separation between the studied unit processes facilitates an independent assessment of each unit process.It also enables the development of proper direct applications for them.As it is demonstrated in Fig. 5, these unit processes are correlated with each other through intermediate flows .Within this visualization, allocation rules have been defined and implemented as Fig. 5 shows.These rules specify in which unit process each elementary or intermediate flow is to be considered.CFRP manufacturing model illustrates the unit processes from fiber cutting by CNC cutter all the way to the wing rib finishing.In preforming, the fiber cuts are draped on mold and formed by applying heat and pressure.Preparing unit process meant to include the preparation of mold, infusion system, autoclave as well as vacuum bagging.Infusion consists only of the infiltration process where the matrix material is allocated.In curing, autoclave is implemented to consolidate the impregnated preform.While the unfinished structure is released within demolding, finishing includes the machining, trimming, and cleaning to produce an assembly-ready CFRP structure.Based on the process visualization, the associated process parameters are collected within LCI.After visualizing the process and defining the associated parameters, these process parameters are collected within the LCI.In Table 2, examples of such collected data from manufacturing wing rib at DLR are shown.For the discussed unit processes, these input parameters include the main elementary flows for both eco-efficiency aspects.The process parameters are collected from CFRP manufacturing as primary data within the LCI between 2014 and 2016 at DLR laboratories in Germany.In Table 2, the demonstrated parameters include the used electricity, fiber, matrix, and their wastes.Table 2 also shows the operation duration of autoclave and CNC cutter as well as the labor work hours.The electricity amount represents the summation of all electrical energy used throughout the manufacturing of a rib divided by the rib mass.Similar to that approach, fiber, fiber waste, matrix, and matrix waste are measured from the entire manufacturing.In Table 2, equipment operation and labor work time are also measured and calculated for each kg of wing rib.Characterization factors are collected within the LCI according to the system boundaries.Selections of ecological and economic characterization factors are illustrated within Table 3 and Table 4 respectively.A selection of ecological characterization factors, their CO2-equivalents, their orientation within elementary flows, as well as their temporal boundaries are shown in Table 3.On the other hand, a selection of cost associated parameters based on DLR internal EEAM-database from 2015 is shown in Table 4.As it is mentioned previously in the simplified equation, LCIA is performed to assess the eco-efficiency impact.LCIA is based on these process parameters and characterization factors which are gathered within the LCI.To accomplish an eco-efficiency assessment in this study, the carbon footprint and direct cost are computed by EEAM.As a LCIA tool, EEAM communicates with the different data sources required to calculate the carbon footprint and direct cost.Besides assessing the total impact, EEAM assesses the carbon footprint and direct cost of each unit process and elementary flow.Based on the integrated framework, the functionality of EEAM is summarized within Fig. 
6.From LCI, the collected process parameters are categorized within generic structure of input data.Due to the scope definition of FRP manufacturing process as a set of unit processes, data from each unit process are collected in separated Excel-spreadsheets.As user friendly model, EEAM user has confined tasks that include distributing the Excel-spreadsheet on each unit process.As a part of LCI, the user documents the process parameters in these sheets based on the system boundary definition in Fig. 5.In practice, such Excel-spreadsheet facilitates the data collection task for the field workers.This spreadsheet is structured generically as a table of inputs from all unit processes.It is adaptable for various manufacturing techniques of different FRP structures.Finally, the user needs to activate EEAM, whereas no extra process modeling is required.Moreover, the user can optionally update the characterization factors.In Fig. 6, these user tasks are illustrated with solid arrows.After filling the spreadsheets with data, EEAM collects these data under defined flow categories including materials, labor, equipment, ancillaries and energy.On the other hand, FRP associated ecological and economic characterization factors are gathered from literatures, suppliers, and internal studies.Examples of these characterization factors have been presented previously in Table 3 and Table 4.These characterization factors are automatically uploaded within EEAM-database and synchronized in each assessment to have up-to-date results.Based on previous studies, about 330 process parameters within clearly distinguished categories are listed in the EEAM-database and integrated in the spreadsheets.These studies include for example processes deduced from internally assessed manufacturing of FRP structures such as aircraft wing leading edge, L and T-shaped FRP structures, FRP pressure vessels, as well as wind rotor blades.In EEAM the assessment is conducted through the correlation between the spreadsheets and the EEAM-database by a python-based tool.This tool connects the spreadsheets, collects the inputs from them, synchronizes these inputs with EEAM-database, and calculates the outputs.As a LCIA tool, EEAM reports the carbon footprint and direct cost results statistically and visually.In case of a new process parameter, EEAM adds this input to EEAM-database and integrates it into a new up-to-date version of the Excel-spreadsheet.As they are shown with dashed lines in Fig. 6, these computer-based activities are performed automatically whenever an assessment is activated in EEAM.Based on EEAM results, developments can be suggested by decision-makers in the form of direct applications.Direct applications can include any eco-efficient management or technical developments.The impact of such direct applications can be estimated within EEAM as well.EEAM anticipates the benefit of such developments in both eco-efficiency aspects.Fig. 7 describes generically the possible impact behaviors of direct applications on eco-efficiency.Fig. 
7 illustrates the benefit of direct applications in two main fields: an eco-efficiency field that serves both aspects and a field dedicated to a single aspect. To keep the illustration generic and interchangeable, either curve can represent the ecological aspect while the other represents the economic aspect, and vice versa. For clarification, three generic, imaginary direct applications with different eco-efficiency impacts are plotted; they can be applied as process modifications to a conventional process. The benefits of these direct applications are shown for both aspects on the vertical axes. Generally, for one aspect in Fig. 7 the benefit increases from one direct application to the next, whereas for the other aspect the direct applications show different impact behaviors. A direct application with positive benefit impacts on both aspects lies in the eco-efficiency benefit field. A second direct application increases the benefit in one aspect while leaving the benefit in the other unchanged. A third increases the benefit in one aspect but decreases it in the other compared with the prior direct applications. As a generic illustration, neither the absolute values nor the correlation of the curves in Fig. 7 is relevant. Examples of direct applications are selected based on the EEAM results of this work and applied to this generic illustration. The illustration assists decision-makers in classifying possible direct applications according to their eco-efficiency impacts in order to promote the proper applications. EEAM calculates both carbon footprint and direct cost results for the assessed process. The carbon footprint is reported in kg CO2-equivalent per kg of CFRP wing rib, while the direct cost impact is compiled in Euro per kg. In addition, fiber and matrix wastes are displayed in order to highlight more eco-efficient developments. Within a report, the exact values of carbon footprint and direct cost are presented to the decision-makers. From the ecological results it is concluded that the carbon footprint of manufacturing each kg of CFRP wing rib is around 109 kg CO2-equivalent. Fiber and electricity are the main contributors to the carbon footprint, as shown in Fig. 8. The impact of these elementary and intermediate flows is reflected in the associated unit processes: as a result of the allocation of these flows, the highest impact appears in the unit processes to which fiber and energy are allocated, as shown in Fig. 9. Cutting has the highest impact, since the fiber is allocated to this unit process; this impact is mainly a result of the selected NCF type, which has a high CO2-equivalent (Table 3). As the largest energy consumer, curing is the second-ranked unit process in terms of carbon footprint. Owing to their manual work, demolding and finishing have a negligible ecological impact. The wasted fiber and matrix contribute about 36% of the total carbon footprint. About 50% of the utilized fiber material is wasted in cutting due to the shape complexity of the fiber cuts, and matrix waste represents about 23% of the matrix used in infusion. The manufacturing direct cost of the assembly-ready wing rib is about 584 € per kg. EEAM results show that labor cost constitutes almost half of the manufacturing direct cost, as shown in Fig. 10. Fiber and matrix account for about 35% of the cost impact. Fig. 11 demonstrates the cost distribution among the manufacturing unit processes, where the allocation of elementary and intermediate flows again plays the main role. Fiber and fiber waste dominate the cutting cost, since half of the fiber cuts are wasted. Labor cost is distributed unequally among the unit processes; it has a significant impact on preforming and preparing due to the ML preforming, vacuum bagging, and mold and autoclave preparation. Equipment cost is significant in curing due to the autoclave, and it also has a clear impact in cutting, where the CNC cutter is utilized. From the EEAM results, the eco-efficiency performance of the manufacturing process in general and of the unit processes in particular is presented to the decision-makers. These results facilitate the identification of process bottlenecks in order to develop suitable direct applications. Both the ecological and the economic aspects of the EEAM results for CFRP wing rib manufacturing are reviewed here. Based on this review, further suitable developments are suggested as direct applications, and their impacts are anticipated for decision support purposes. Finally, the assessment results are evaluated by performing a set of validation activities. From the compiled results it is concluded that fiber and energy dominate the carbon footprint. Due to the high CO2-equivalent of the fiber selected for aerospace structures, fiber has the highest ecological impact. Energy is consumed mainly by the autoclave during curing, and this consumption depends on the process duration and the curing cycles. By implementing proper direct applications, eliminating fiber waste could avoid about 36% of the total carbon footprint and reduce the total direct cost by about 17%. The significant fiber waste is a result of the shape complexity of the fiber cuts, which prevents efficient cutting based on the correlation between the orientation of the fiber cuts and the capacity of the CNC cutter. Moreover, about 26% of the manufactured unfinished wing rib is wasted; this CFRP waste is produced during the machining and trimming work within the finishing unit process. With EEAM, decision-makers can evaluate their direct applications. To achieve a more eco-efficient process, decision-makers should promote direct applications that are beneficial for both aspects. In the aerospace industry there is still significant potential for such eco-efficiency developments (see Fig. 7), since the maturity of the manufacturing process is lower than in other industries such as the automotive industry. As an example of such a direct application, waste reduction plays a decisive role in the future of CFRP implementation: eco-efficient cutting, infusion, and finishing solutions are required, so that cost and carbon footprint can be reduced simultaneously in these unit processes. Other direct applications serve a single aspect, as represented generically in Fig. 7. In practice, the impact on the economic and ecological aspects differs when it comes to energy reduction: reducing the energy consumption leads to only a minor cost reduction, but it decreases the carbon footprint significantly, making energy reduction a primarily ecological improvement. For the economic aspect, a reduction of labor work is a beneficial direct application, while in practice it has no impact on the ecological aspect. Conversely, a direct application that improves one aspect at the expense of the other is exemplified by applying an environmentally friendly fiber with a higher direct cost, or by implementing a highly automated process that lowers the direct cost but raises the carbon footprint. Based on the results of this work and on knowledge of the aerospace industry, a wide range of eco-efficient direct applications can be promoted, including not only technical but also management developments. In practice, decision-makers may still favor applications that are beneficial for one eco-efficiency aspect regardless of their impact on the other, depending on the situation. In order to ensure the reliability of the compiled results, these results are validated. Based on the qualification of the conceptual model, the process model has been adjusted iteratively to reach a sufficient representation of the real manufacturing process. The second step is to check the comprehensiveness of the report generated by the EEAM python tool from the collected data. In LCA, the evaluation includes completeness and sensitivity checks. The completeness check examines the availability and entirety of the data. As shown previously in Fig. 6, EEAM is based on field data collection, literature, supplier data, and previous internal studies. Within the LCI, the data are collected from identical manufacturing events for ten wing ribs; for real field data, data unavailability within the system boundary represents only a minor problem. Furthermore, a sensitivity check is performed by comparing the results of this work with the previously reviewed studies. The results of Witik et al. show similar economic behavior. Comparing the results of Das from the automotive industry with aerospace results reveals significant differences in manufacturing cycle times, which lead to an enormous cost difference. Hagnell et al. studied several assembly cases for the entire wing box; in these cases the wing ribs are handled consistently, while the study focuses on the differences between assembly scenarios. For low production volumes, the work of Hagnell et al. shows a close result. The results of both the Gutowski et al.
and Haffner studies differ from the results of this work due to the variation in temporal, geographical, as well as technical system boundaries.The results of the majority of discussed literatures and this work vary remarkably.Therefore, this paper can contribute with its results in illuminating new detailed perspectives in assessing the eco-efficiency of manufacturing complex CFRP structures such as wing ribs in aerospace industry.This paper presents the case study of assessing aircraft wing rib structure made of CFRP for a modern commercial aircraft.The assessment is performed within an integrated LCA and BPR framework for decision support in manufacturing.For FRP manufacturing processes, the computer-based EEAM is developed as a decision support tool.In this paper, EEAM assesses CFRP manufacturing performed by the in-autoclave SLI and ML technique at DLR.Considering existing associated literatures, the results of some literatures vary remarkably from this paper due to the distinction in manufacturing techniques and system boundaries.Other literatures conclude similar results which validate the results of this work.The results of this work illuminate possible direct applications for the decision-makers.Lean manufacturing is an example of such management tools.On the other hand, advanced technologies for reducing the fiber waste during cutting are also needed.Moreover, energy consumption within curing can be significantly reduced.This might be achieved by more efficient autoclave utilization through reducing the curing cycle time and manufacturing multi ribs simultaneously in each cycle.Fiber and matrix waste can be also reduced by minimizing the machined part in finishing.Furthermore, implementing more automated processes can lead to a reduction in labor cost.In aerospace industry, certifications and regulations should be considered in such direct applications.A gate-to-gate assessment of carbon footprint and direct cost of manufacturing process is a cornerstone in performing a cradle-to-grave assessment.Such a cradle-to-grave eco-efficiency assessment is crucial for the future of the CFRP implementations in aerospace structures.In practice, huge efforts are required for data collection in an activity-based eco-efficiency assessment.Moreover, precise system boundary, unit process definition, and elementary flow allocations are crucial and effortful.Therefore, data collection can be enhanced through the implementation of smart measurement systems that reduces the LCI efforts in eco-efficiency assessment."The research leading to these results has received funding from the European Union's Seventh Framework Programme under grant agreement n°314003.The authors declare no conflict of interest.
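As described above, the LCIA step of EEAM essentially multiplies every collected process parameter by its characterization factor and sums the contributions per unit process and per elementary flow. The following Python sketch illustrates that bookkeeping in outline only; it is not the DLR EEAM implementation, and all flow names, quantities and characterization factors shown are invented placeholders rather than the values reported in Tables 2–4.

# Minimal sketch of an activity-based, gate-to-gate eco-efficiency aggregation,
# assuming per-unit-process inputs (as collected in the LCI spreadsheets) and
# per-flow characterization factors. All names and numbers are illustrative only.
from collections import defaultdict

# Hypothetical characterization factors: carbon footprint in kg CO2-eq per unit
# of flow, direct cost in EUR per unit. Labor and equipment hours are assumed to
# carry no direct emissions here, since energy is tracked as a separate flow.
FACTORS = {
    "fiber_kg":        {"co2e": 25.0, "cost": 60.0},
    "matrix_kg":       {"co2e": 7.0,  "cost": 30.0},
    "electricity_kWh": {"co2e": 0.6,  "cost": 0.15},
    "labor_h":         {"co2e": 0.0,  "cost": 70.0},
    "equipment_h":     {"co2e": 0.0,  "cost": 40.0},
}

# Hypothetical LCI data: quantities per kg of assembly-ready rib, per unit process.
LCI = {
    "cutting":    {"fiber_kg": 2.0, "electricity_kWh": 1.5, "labor_h": 0.3, "equipment_h": 0.3},
    "preforming": {"labor_h": 1.2, "electricity_kWh": 2.0},
    "infusion":   {"matrix_kg": 1.3, "labor_h": 0.4},
    "curing":     {"electricity_kWh": 40.0, "equipment_h": 3.0, "labor_h": 0.2},
    "finishing":  {"labor_h": 0.8, "equipment_h": 0.5},
}

def assess(lci, factors):
    """Return total and per-unit-process carbon footprint and direct cost."""
    per_process = defaultdict(lambda: {"co2e": 0.0, "cost": 0.0})
    for process, flows in lci.items():
        for flow, quantity in flows.items():
            cf = factors[flow]
            per_process[process]["co2e"] += quantity * cf["co2e"]
            per_process[process]["cost"] += quantity * cf["cost"]
    totals = {
        "co2e": sum(p["co2e"] for p in per_process.values()),
        "cost": sum(p["cost"] for p in per_process.values()),
    }
    return totals, dict(per_process)

totals, breakdown = assess(LCI, FACTORS)
print(f"Total: {totals['co2e']:.1f} kg CO2-eq/kg rib, {totals['cost']:.0f} EUR/kg rib")
for process, impact in breakdown.items():
    print(f"  {process:11s} {impact['co2e']:6.1f} kg CO2-eq  {impact['cost']:7.1f} EUR")

Keeping the aggregation keyed by unit process and by flow is what enables bottleneck rankings of the kind shown in Figs. 8–11, and the same structure extends naturally to reading per-process spreadsheets of the sort described for EEAM.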
Carbon fiber reinforced polymers (CFRP) are frequently used in the aerospace industry. However, the manufacturing carbon footprint and direct cost are obstacles to adopting CFRP in further aerospace structures. Therefore, the development of a combined ecological and economic assessment model for CFRP manufacturing is demonstrated in this paper. This model highlights the proper developments for the decision-makers. In this work, the eco-efficiency assessment model (EEAM) is developed based on life cycle assessment (LCA) and life cycle cost analysis (LCCA). EEAM is an activity-based, bottom-up decision support tool for the manufacturing process of fiber reinforced polymer (FRP). This paper discusses a case study of manufacturing CFRP wing ribs for a modern commercial aircraft as part of the project LOCOMACHS. The ecological results of EEAM show that the carbon footprint of manufacturing a wing rib made of CFRP thermoset by the technique of in-autoclave single-line-injection (SLI) is around 109 kg CO2-equivalent for each kg of CFRP. Fiber material is the main contributor to this carbon footprint. The economic assessment shows that the studied rib has a direct manufacturing cost of about 584 €/kg. In these results, labor work dominates the direct cost with 49%, while fiber and matrix account for about 35%. As an activity-based assessment model, EEAM guides the decision-makers toward sustainable direct applications. It is concluded that direct applications for fiber waste reduction are beneficial for both eco-efficiency aspects. Reducing energy consumption is ecologically beneficial, while reducing labor work is cost relevant. In the aerospace industry, there is a clear potential for eco-efficient direct applications that satisfy both aspects.
422
Engineering serendipity: High-throughput discovery of materials that resist bacterial attachment
Antimicrobial resistance has been predicted to rival cancer as both a cause of death and an expense to healthcare systems by 2050 .Bacteria inflict significant human suffering through acute and chronic disease.Infection has serious impacts upon morbidity and mortality .Socio-economic factors associated with infectious diseases have negative influences upon trade, commerce and social development .Infectious diseases are problematic for both patients and society as a whole, for example the cost of Clostridium difficile infections in the US alone is estimated at over $796 million and the total cost for healthcare-associated infections is between $28 billion and $45 billion per year .A recently published prevalence survey identified that in 2011 medical-device associated infections accounted for 25.6% of healthcare associated infections .Many types of devices, such as venous catheters and prosthetic heart valves, become colonised by bacteria which can subsequently form biofilms and cause infection and device failure .In 2009 it was estimated that in the US alone there are a quarter of a million central-line-associated bloodstream infections annually leading to 31,000 deaths per year .The treatment of device-associated infections often proves particularly challenging as micro-organisms within a biofilm are able to protect themselves from the immune system and antibiotics .Bacteria are estimated to be 10–1000 times more tolerant to host defences and antibiotics than in their planktonic state .For example the bactericidal concentration of a particular systemic dose of vancomycin for Staphylococcus epidermidis increases from 6.25 micrograms/mL to 400 micrograms/mL when the bacterium moves from the planktonic state to form a biofilm .Following years of repeated and prolonged use and mis-use of antibiotics, anti-microbial resistance is now a global threat .A preferred approach is therefore to avoid the use of antibiotics and biocidal agents and to prevent the development of device associated infections by preventing surface colonisation and biofilm formation.The first stage of biofilm formation involves the initial attachment of individual bacterial cells or small bacterial aggregates, which is usually preceded by adsorption of biological macromolecules .Systems designed to prevent bacterial attachment aim to disrupt biofilms at the earliest possible stage.Polymer materials are well suited to biofilm prevention.It has been shown that they can be readily tailored by variation in their chemistry to achieve non-fouling effects .One problem is that the required chemistry cannot readily be predicted from first principles.The approaches employed to resist biofilm formation are either the production of cytotoxic materials designed to kill bacteria upon contact , which are unlikely to select for resistance , or anti-adhesion strategies whereby the materials circumvent bacterial attachment, biofilm formation and the hence associated increase in resistance to antibiotics and host defences.Compared to antibiotic containing materials, surfaces that resist bacterial attachment do not induce the evolutionary pressure which would lead to bacterial resistance.This characteristic means that this class of material is of particular interest in an age of growing antibiotic resistance.A number of anti-fouling polymer strategies have recently been reviewed by Rosenhahn et al. 
.The mechanisms that have been employed to prevent attachment include electrostatic repulsion, steric repulsion, topography and hydration.Kosmotropes, which stabilise proteins in their native form, particularly poly and zwitterionic polymers have attracted the most research attention as anti-fouling films for their ability to prevent cell attachment .Their discovery through early observations of their resistance to coating by proteins have now led to wide spread use .Poly acts by hydrogen bonding to up to three water molecules per repeating ether group.Through this mechanism, the complex is sterically stabilised.In order for protein to adsorb the chains must be compressed and water released.The removal of water from the chains has an enthalpic cost.The compression of the chains has a corresponding entropic expense .However hydrophilicity and steric hinderance alone do not explain its efficiency suggesting its unique solution properties contribute to its non-fouling ability .Zwitterionic polymers are electrically neutral yet tightly co-ordinate water through ionic interactions.This results in a highly hydrophilic surface where, similarly, the removal of water is entropically unfavourable .However, protein adhesion is a process which one can reasonably assume may be described by physicochemical processes.It is well known that mammalian cellular adhesion is regulated by protein adsorption through integrin–peptide interactions .There have been attempts to use physicochemical parameters to explain bacterial attachment .Recently superwettable materials have gained interest to aid understanding of bacterial interactions with certain materials.Bacterial surface components such as peptidoglycan have shown to influence adherence with surfaces of varying hydrophilicity .However predicting attachment across our wide chemical libraries using water contact angle has not been successful, apart from in very limited subsets of similar materials, leading us to conclude that the use of the wettability as a surface descriptor is not helpful in understanding bacterial–surface interactions .This points to the importance of sophisticated bacterial sensing mechanisms and downstream cellular responses in determining their responses to a specific material.Bacterial cells do not behave as inanimate objects but possess complex regulatory network systems for sensing and mechanics, for example type IV pili and other fimbrial types that help determine their reactions to different surfaces.This flexibility presents a difficult problem to identify attachment resistance surfaces.Despite significant on-going research into anti-adhesive surfaces there has been a lack of translation from successful laboratory-based systems to clinically useful medical devices, many of which still employ high-fouling surfaces.Some instances where translation has been successful include the PolySB coating based upon work by Loose et al. at Massachusetts Institute of Technology and Avert™, a poly and biguanide coating produced by Biointeractions Limited .Using an drug eluting system, J. Kohn developed TYRX™ which employs a polymer discovered in a high throughput screening campaign.The product is constructed of a polymer, containing rifampicin and minocycline which may be absorbable or non-absorbable .A retrospective, observational analysis demonstrated the coating to reduce the risk of infections and patient mortality .Busscher et al. 
discussed the issues facing translation of outcomes from scientific studies into successful biomedical devices.The delicate interplay between bacteria and host cells, such as epithelial cells, that leads to colonisation of an implanted surface has, thus far, been difficult to replicate in laboratory studies.Improving these ex vivo models should help improve translation to the clinic .These experimental issues are compounded by the number of patients and the duration required for clinical trials to demonstrate efficacy and safety unequivocally .Beyond innovation barriers, new approaches are required between academic discovery groups and those in industrial and regulatory areas to increase translation to improve health outcomes .For example, in academia there is a need to publish and this can often be preceded by inadequate patenting or none at all.They believe that this lack of intellectual property demotivates industrial players from developing new ideas which could drive further translation.Existing biomedical materials have arisen from a combination of accessibility and utility.For example silicone rubber, originally developed as an electrical insulator, finds common application as a medical device material due to its useful mechanical properties and inert nature.Polymethacrylates originally used in the plastic canopies on warplanes were subsequently exploited for use in intraocular lenses since it was noted that polymer fragments trapped in pilots’ eyes were well tolerated .Whilst the mechanical and other bulk properties may be optimal for these materials for a given application, the tendency to promote bacterial attachment and biofilm formation is not.Consequently, modification of these basic materials offers one route to new biomaterials which has been explored, e.g. 
copolymerisation and surface functionalisation.However, an alternative is to start afresh and ask the question, what would the optimal material be for a particular application?,The aim of this article is to guide the reader through steps taken thus far to control bacterial biomaterial interactions and how this area could develop in a different but exciting direction by answering this question.Over the last decade, dramatic advances have been made through both hypotheses relating material properties to cellular responses, and discovery of new materials made using high-throughput screening .Useful illustrations of the step change jumps in knowledge are the material property-cell differentiation relationships characterised for mesenchymal cells.These include Engler and Discher’s data that substrate stiffness could direct stem cell fate , Dalby and Oreffo’s observation that nanotopography could control cell differentiation and Anseth’s correlation of chemistry in 3D cell encapsulating gels with differentiation lineage .However the relative contributions of these effects on cell fate are poorly defined and similar relationships have not been established for microbial cells.Despite these advances, rational design of new biomaterials is still hindered by the paucity of information on the physicochemical parameters governing the response of different cell types of interest to a broad range of materials.The rational design roadblock for biomaterials has promoted an interest in the application of data driven, high-throughput screening approaches that can be applied to any cell type and adapted to model a particular application area/service environment .Mounting screening campaigns with large material libraries could be referred to as engineering “happy accidents”, or serendipity.This was first illustrated in 2004 for polymer micro arrays by Anderson et al. for stem cells and also exemplified by the Bradley group .In addition to the identification of hit materials for further development towards materials for cell control, the large amount of information generated on the biointerface can be used to obtain new insights.To achieve structure–property relationships requires analysis of the surface chemistry rather than assumption of its identity from the input monomers, which is where the development of high-throughput surface characterisation has been critical .This has facilitated correlations of the surface chemistry and monomer identity with bacterial attachment to move towards rational design , combining an understanding of both the organism and materials to propose new biomaterials ab initio .The success of the data driven scientific method illustrated using the high-throughput screening approach demonstrates the validity of its application for materials discovery.Recently, Autefage et al. used a non-reductionist approach to understand the influence of strontium ion incorporation into 45S5 bioactive glass.Their unbiased investigation found a number of changes, including increases in cellular and membrane cholesterol content and in phosphorylated myosin II light chain which may not have be identified from biological predictions alone .This contrasts with reductionist approaches favourably, indicating the need to identify new materials but then understand their mode of action through hypothesis driven investigation .Brocchini et al. 
in 1998 applied the combinatorial materials screening approach to polymeric biomaterials discovery.The authors synthesised separately but in parallel, 112 biodegradable polyacrylates from 8 aliphatic diacids and 14 diphenols before subsequent surface coating and testing .Significantly, the authors discovered that while increasing the surface wettability improved fibroblast growth, backbone substitution with an oxygen also improved growth without affecting surface hydrophilicity.This showed that single characteristics such as water contact angle cannot be used to guide material development for complex living systems.The procedure allowed many materials and their characteristics to be analysed, however, the separate syntheses and subsequent coating made the process unacceptably slow for ultra high-throughput novel materials discovery where rapid evolution from one generation to the next in response to cell response data is desirable.In order to optimise the rate at which new biomaterials could be discovered and their biological properties assessed, the microarray format has now become routine.In this way, hundreds of unique polymers are generated on-slide and assayed on a single substrate in a single experiment.The group of Langer et al. first published the use of microarrays to screen hundreds of materials for their effects upon the growth and differentiation of human stem cells .This initial report described 576 unique polymers in triplicate, generated in situ by printing monomers pairwise into an array and curing using UV on 25 × 75 mm pHEMA coated glass slides.Commercial monomers were employed to enable ready access to a large chemical space.This enabled rapid, simultaneous assays to be carried out in parallel.The new platform set the precedent for high-throughput material discovery for the purposes of controlling cell attachment and growth including bacteria .The different methods which can be used to prepare microarrays have been reviewed recently in the literature .Subsequently, there have been a number of notable successes for the high-throughput screening approach for discovery of novel biomaterials including the identification of a new class of polymers resistant to bacterial attachment that has potential as medical device coatings .More recently, also a series of materials that allow long term renewal of pluripotent stem cells .These finds have been achieved using unbiased screening or “fishing expeditions” a commonly used pejorative description of this approach.When considering materials from which to manufacture medical devices with reduced rates of infections, the question is, are there materials better at resisting attachment of bacteria than poly and zwitterionic polymers?,The extension of the high-throughput platform was a logical transition to emulate the successes seen with eukaryotes.Pernagallo et al. 
produced an early report of an array to identify specific bacteria binding/non-binding polymers.The authors used clinically relevant strains and identified polyacrylate materials which had properties of high binding, low binding and selective binding .The Bradley group later screened a 381 polymer library with up to eleven different bacterial strains/species including those obtained from endotracheal tubes or from patients with infectious endocarditis.Further to their high-throughput screen a number of “hit” polyurethane and polyacrylates/acrylamides were identified.The hit coating, which was a co-polymer of methylmethacrylate and dimethylacrylamide in a molar ratio of 9:1, generated ⩾96% reduction for the bacteria mixtures tested using a microaerobic environment and supplementing growth with BHI after 24 and 48 h .The surface characteristics and properties of materials obtained from combinatorial methods cannot be entirely rationalised based upon the composition assumed from the raw materials used for synthesis, and as such surface analysis is a key component for understanding the materials’ behaviour .To understand, explain and develop a biological observation from novel materials, the surface characteristics need to be described .Traditional polymer characterisation methods do not generally provide information on the upper-most surface which controls cellular response to materials.This is problematic as importantly it is the surfaces of the materials which dictate the observed phenomena.To circumvent this analytics barrier, surface analysis techniques such as time of flight secondary ion mass spectroscopy , atomic force microscopy , surface wettability measured through water contact angles , surface plasmon resonance and X-ray photoelectron spectroscopy allow for rapid characterisation of polymer microarrays.Together with the microarray format, these techniques are known as high-throughput surface characterisation .The polymer array and combinatorial approach can be used to perform a biased screen, when the identity of the members of the library on the array are chosen in order to test preconceived notions of the function of certain monomer or monomer combinations when combined to make polymers.Alternatively an unbiased screen can be performed where the aim is to cover as wide a chemical space as possible in the hope that unexpected relationships may be discovered.This most exciting screening method provides the opportunity to discover new and unexpected materials and materials classes.After the response to a surface has been described, the nature of any interconnectiveness between the surface analysis data and biological response can be explored.ToF-SIMS surface analysis data is complex and multivariate.In order to overcome the challenge of information overload a mathematical technique called Partial Least Squares has been applied.This is done to correlate multivariate datasets such as surface analysis to univariate data, for example the observed biological response .This was successfully used to correlate the attachment of human embryoid body cells with the surface chemistry of the materials as measured by ToF-SIMS .The ToF-SIMS technique is versatile and has been used to identify key spectral ions responsible for reduced or increased bacterial attachment to a library of polymers .Hook et al. 
generated an expansive combinatorial polymer microarray from 22 commercially available acrylate monomers.These were combined in varying ratios to create 496 unique polymers obtained through photo-initiated free-radical polymerisation.The array was incubated with fluorescent bacteria for three days and the resulting attachment or resistance to bacteria was quantified from the fluorescent signal on each polymer surface.The observed fluorescent signal could be predicted using linear PLS correlation for the bacteria Pseudomonas aeruginosa and Staphylococcus aureus.Cyclic hydrocarbons, esters, tertiary butyl moieties and non-aromatic hydrocarbons were identified from the ToF-SIMS signals as key to reduce bacterial attachment.This contrasted with ethylene glycol and hydroxyl groups that were found to correlate with higher bacterial attachment.These insights were used for the selection of ‘hit’ monomers to be used in a subsequent generation array that screened copolymer series for formulation optimisation.Furthermore, the chemical fragments associated with low bacterial attachment provided insight into the physio-chemical mechanism by which the polymers resisted bacteria.Specifically, the association of both the hydrophilic ester groups and hydrophobic cyclic hydrocarbon groups with low bacterial attachment suggested that the weakly amphiphilic nature of the polymers was key to their function.The scaled up material showed a thirty-fold reduction in biofilm formation compared to commercial antibacterial silver hydrogel.In the group’s later work, the microarray was expanded to generate 1273 individual polymers and clinically isolated pathogenic bacterial strains were included in the screen.From this larger screen, a greater number of materials had their antibacterial properties investigated.Lead hit materials showed up to 99% fewer bacteria attached compared to an antibacterial silver hydrogel .The number of potential polymers that could be synthesised are innumerable, thus, it is not experimentally feasible to screen all possible materials even utilising high-throughput screening methods.To truly assess the interaction of bacteria with the polymeric materials space another approach is required.The use of materiomics, involving computational and experimental approaches with large datasets for biomaterial discovery has recently been reviewed in a number of books and reviews .This equation links chemical parameters of a biomaterial by the number of rotational bonds and hydrophobicity to bacterial attachment on that surface.The challenge in this area, as it was for drug discovery, is to project outside the training set, to identify candidates outside the modelled materials space.The complex data processing requires machine learning systems capable of complex pattern recognition beyond that achieved by PLS modelling .Epa et al. in 2014 utilised a machine learning modelling approach to predict the attachment of P. aeruginosa, S. aureus and Escherichia coli to polymer surfaces from a library of computationally derived molecular descriptors .Following predictions of attachment, several monomers that had not been used to generate the models were tested for their propensity for colonisation by P. 
aeruginosa.The model successfully predicted those monomers that would produce high and low adherent surfaces.A number of molecular descriptors were invoked to explain the bacterial attachment to the materials, including those incorporating the number and type of chemical functional groups in the polymers, descriptors relating to the ability of the polymers to form hydrogen bonds, the dipole moments, surface wettability, molecular shape, and complexity.The successful utility of these descriptors to predict the biological performance of the materials suggests these properties play a key role in determining how bacteria attach or do not attach.As such, predictive models are useful not only as a materials development tool but also at providing novel insights into the underlying biological–material interactions.The ultimate goal is for modelling techniques to allow us to “dial up” biomaterials and drugs based upon the desired characteristics while minimising experimental time.Although this is far from where we are currently with biomaterials, one can look to the in silico design of aerospace components for inspiration.Polymers offer many advantages as a material platform for manipulating bacteria–biomaterial interactions.Polyvalency and intricate property manipulation can allow for the development of precision surfaces and materials.The ability to produce polymers in a variety of formats, for example as three dimensional networks, soluble agents and surface coatings, allows for their application to a variety of analytic formats and uses.Understanding the interactions of bacteria with biomaterials may be a route to improve functionality.Such intelligence-led approaches have been demonstrated and are guided by knowledge of organisms and their attachment processes .However, the colonisation process, for example, is complex and not all steps are known and understood.However, the future goal is not simply to prevent bacterial attachment but also to control the specific biological responses to a surface.Research is already underway into bioinspired devices where ligands and proteins direct cell behaviours such as colonisation and proliferation, so called “third generation” biomaterials .Beyond this third generation, biomaterials should encourage integration of man-made devices yet be able respond appropriately to biological or chemical cues should infection or rejection occur.For example pH responsive materials have been demonstrated to self-clean in response to local pH reductions due to bacteria colonisation .For greater integration, enzymatic or other cell responses could be used to direct the behaviour of the material.Magennis et al. 
recently showed a ‘bio-inspired’ approach in which bacterial enzymes mediate polymer synthesis at a cell surface and produces materials more favourable to cell binding than those synthesised in the absence of bacteria .In this work it was demonstrated that polymers with co-monomer incorporation dependent on the templating bacterial species could be formed through polymerisation in the presence of cells.Utilising changes in redox potentials as a result of cell enzyme cascades combined with copper-mediated radical polymerisation and “click” chemistry, polymers with specific affinity to the template were generated in situ.Furthermore, fluorescent tagging of the polymers could take place at the bacterial surfaces directed by the same enzymes.This work showed how in complex biological systems, living processes and cell metabolism can be utilised to direct material properties.Combining these kinds of cellular biological pathways with third generation biomaterials and integrating their development into high-throughput screening approaches may lead to biomedical devices with much-improved functionalities for the future.In summary, high-throughput strategies are leading to novel materials discovery when large numbers can be screened and “hits” identified retrospectively rather than planning those to yield positive results.Looking forward, despite the advantages of high-throughput methods, due to the vast variety of polymers which could potentially be synthesised, it is anticipated that we will see the increasing prominence of computation guided screening campaigns where initial descriptive results are then modelled and theoretical hits generated from virtual libraries.This approach will help to facilitate the next generation of highly functional polymer materials for safe, fully integrated biomedical devices and technologies.The authors have no conflict of interest to declare.
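To make the PLS step described above concrete, the sketch below shows, under stated assumptions, how a multivariate block of ToF-SIMS peak intensities can be regressed against a univariate bacterial-attachment signal using scikit-learn. The spectra and attachment values are randomly generated stand-ins, the number of peaks and the choice of eight latent components are arbitrary, and the "key ion" indices are synthetic; only the overall workflow (fit, cross-validate, rank coefficients) mirrors the screening analyses discussed here.

# Minimal sketch of a PLS correlation between multivariate surface-analysis data
# (e.g. ToF-SIMS peak intensities, one row per polymer spot on the microarray)
# and a univariate biological response (e.g. fluorescence from attached bacteria).
# The data below are random stand-ins, not measured spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_polymers, n_peaks = 496, 120          # 496 spots as in the first-generation array
X = rng.random((n_polymers, n_peaks))   # normalised peak intensities (synthetic)

# Synthetic response: attachment driven by a few "key ions" plus noise.
true_weights = np.zeros(n_peaks)
true_weights[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = X @ true_weights + 0.1 * rng.standard_normal(n_polymers)

pls = PLSRegression(n_components=8)
pls.fit(X, y)

# Cross-validated R^2 indicates how well surface chemistry predicts attachment.
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 = {r2:.2f}")

# Ranking the regression coefficients highlights which peaks correlate with
# high or low attachment, analogous to identifying the fragments reported above.
ranked = np.argsort(np.abs(pls.coef_.ravel()))[::-1][:5]
print("most influential peak indices:", ranked)

In practice the ranked coefficients are interpreted against assigned secondary-ion fragments, which is how moieties such as cyclic hydrocarbons and esters came to be associated with low attachment and glycol or hydroxyl ions with high attachment; the same fit-and-validate loop applies when computed molecular descriptors replace spectra, as in the machine-learning models of Epa et al.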
Controlling the colonisation of materials by microorganisms is important in a wide range of industries and clinical settings. To date, the underlying mechanisms that govern the interactions of bacteria with material surfaces remain poorly understood, limiting the ab initio design and engineering of biomaterials to control bacterial attachment. Combinatorial approaches involving high-throughput screening have emerged as key tools for identifying materials that control bacterial attachment. The assessment of hundreds of different materials by these methods can be carried out with the aid of computational modelling. This approach can develop an understanding of the rules used to predict bacterial attachment to surfaces of non-toxic synthetic materials. Here we outline our view on the state of this field and the challenges and opportunities in this area for the coming years. Statement of significance: This opinion article on high-throughput screening methods reflects one aspect of how the field of biomaterials research has developed and progressed. The piece takes the reader through key developments in biomaterials discovery, particularly focusing on the need to reduce bacterial colonisation of surfaces. Such bacteria-resistant surfaces are increasingly required in this age of antibiotic resistance. The influence and origin of high-throughput methods are discussed, with insights into the future of biomaterials development, where computational methods may drive materials development into new, fertile areas of discovery. New biomaterials will exhibit responsiveness to adapt to the biological environment and promote better integration and reduced rejection or infection.
423
Atomic-scale investigations of isothermally formed bainite microstructures in 51CrV4 spring steel
The formation of bainite has been attracting scientific interest for almost a century.What makes it attractive is the fact that it combines characteristics of fundamentally different phases, which are martensite and Widmanstätten ferrite; this circumstance is the reason why the mechanism of formation of this phase, a mixture of ferrite and cementite, is still object of controversy.The morphology of ferrite and the distribution of carbides depend on the transformation temperature and therefore bainite is usually classified as either upper bainite or lower bainite.When formed at low temperatures, lower bainite shares common characteristics with martensite and it is sometimes even impossible to distinguish them microscopically.This experimental finding suggests that the mechanism of formation of bainite is similar to that of martensite and therefore might be diffusionless in nature.On the other hand, an increasing number of experimental and theoretical studies support the similarity between bainitic ferrite and proeutectoid ferrite with Widmanstätten morphology and the idea that bainite growth is determined by carbon diffusion.Hultgren first proposed in 1947 that upper bainite could form by initial precipitation of ferrite with Widmanstätten morphology followed by cementite precipitation on its sides .The concept was further analysed by Hillert, who has shown that there is no reason to treat Widmanstätten ferrite and bainitic ferrite as different products, as there is no kinetic discontinuity .According to this approach, the bainitic ferrite nucleates at austenite grain boundaries and grows at a rate determined by the diffusivity of carbon.Aaronson et al., supporting the above-described theory, have considered the effect of alloying elements on the bainitic transformation, and the potential segregation of substitutional elements at the growing phase interface .The diffusionless approach to bainite formation was introduced by Zener in 1946 , further developed by Ko and Cottrell , and more recently supported by Bhadeshia .According to this approach, a sub-unit of bainitic ferrite, supersaturated in carbon, nucleates on an austenite grain boundary.The growth is instantaneous and displacive and stops because of the plastic deformation of the adjacent austenite.The bainitic ferrite is initially supersaturated with carbon, which needs to be rejected by diffusion into the residual austenite, in which it can form carbides with para-equilibrium composition.Carbon can also directly precipitate in the form of carbides within the bainitic ferrite sub-unit, if insufficient diffusion can take place due to the transformation temperature being low.Once bainitic sub-units have formed, the bainite formation can continue by the autocatalytic nucleation and displacive growth of new sub-units on the tip of previously formed sub-units.Recent studies have revealed that, although carbon is depleted from bainitic ferrite, bainitic ferrite even after long times retains more carbon than is theoretically predicted by thermodynamic equilibrium.Recent Atom Probe Tomography and Synchrotron X-Ray Diffraction studies observe this phenomenon and some authors claim that the carbon that is trapped in the bainitic ferrite causes tetragonality of its cubic lattice .However, the fact that the carbon does not diffuse out of the bainitic ferrite given the time and the driving force still cannot be explained satisfactorily.The composition appears to be playing a very important role in the morphology of bainite.Especially in high 
carbon and chromium containing steels, when austenite decomposes at temperatures between 500 and 700 ∘C, non-classical austenite decomposition products were reported.One of them, formed close to the bainite start temperature, is called inverse bainite on the basis of its inverse morphological characteristics with respect to the conventional bainite .Inverse bainite is in fact identified as a phase mixture of carbide plates surrounded by Widmanstätten ferrite.The presence of inverse bainite was used by Borgenstam et al. to support the concept that austenite decomposition occurs by “mirror mechanisms” for compositions lower and higher than the eutectoid composition , and thus claim the generality of the diffusional mechanism in bainite formation.By mirror mechanisms it is implied that bainite and inverse bainite are products of austenite decomposition, which are obtained by similar processes, but with their constituents, ferrite and cementite, having opposite roles.In a bainitic microstructure, a leading phase grows from the prior austenite grain boundary with a Widmanstätten morphology, followed by the formation of a secondary phase.Following this mechanism, at hypo-eutectoid compositions, the leading phase is ferrite, while at hyper-eutectoid compositions the leading phase is cementite.Recent studies on inverse bainite formation have more clearly defined its mechanism in terms of thermodynamics and have provided insight into the microstructural and crystallographic aspects .Besides carbon, substitutional alloying elements such as Mn, Cr, and Mo are known to affect the formation of bainite in different ways.The presence of these elements in steel retards the growth of bainite by inducing a solute drag effect and can also limit the maximum fraction of bainite that can be obtained from an isothermal treatment at a given temperature .Cr addition reduces the eutectoid carbon composition of steels, so even alloys with C content as low as 0.4 wt% can produce microstructures similar to the ones found in hypereutectoid FeC steels.This is especially applicable to the observation of inverse bainite, which is primarily found in hyper-eutectoid steels.Recent studies reported similar microstructures in Cr containing steels with lower carbon contents .Transmission Electron Microscopy studies of inverse bainite microstructures show evidence of a crystallographic orientation relationship between the acicular carbides, the surrounding ferrite and the parent austenite.This was achieved by relating the orientation of retained austenite in partially transformed specimens with the orientation of the carbides and the ferrite sheaves.Information about the composition of the different constituents of the inverse bainitic microstructure is very limited in literature, most of the studies report the carbides as being of the M7C3 type with significant partitioning of Cr from the matrix to the carbide.Detailed study on the composition of the non-classical decomposition products of austenite can help to elucidate the transformation mechanism leading to their formation, as well as the relation between these products and conventional bainite.Due to the fine scale of the microstructure, a high resolution chemical composition analysis technique is required.Atom Probe Tomography is a very suitable technique for such an analysis, especially if combined with site-specific tip preparation and TEM.In this work, the microstructures obtained by isothermal treatment of 51CrV4 steel within the temperature range of bainite formation 
are studied by means of transmission electron microscopy and atom probe tomography.TEM provides information about the bainitic microstructure morphology at different temperatures, APT enables a detailed compositional analysis of these microstructures.The compositional information can help elucidate the reasons for the wide morphological variety of bainite in medium carbon low alloy steel grades.Samples of 51CrV4 steel were received in as rolled condition.The samples were cut out from hot rolled bars with dimensions 95 × 49 × 5500 mm3.The chemical analysis was performed on 30 × 30 mm2 cross-sections of the bars, perpendicular to the rolling direction, by means of Optical Emission Spectroscopy.The chemical composition of 51CrV4 steel is shown in Table 1.Besides the average chemical composition, local fluctuations in the chemical composition, expected to affect the phase transformations, were measured in the normal direction by Electron Probe Micro Analysis.The details of the measurement, as well as the interpretation of the effect of the segregation on the microstructure formation were previously published by the authors .The concentration profile of Cr and Mn is shown in Fig. 1.The dilatometric specimens with dimensions Φ4 × 10 mm2 were machined using Wire Electro-Discharge Machining.The dilatometric tests were performed in a Bähr 805A Quench dilatometer, with the heat treatment schemes shown in Fig. 2.The specimens were placed in the dilatometer with two thermocouples, spot welded, one at the centre and the other 1 mm from the edge in order to control the temperature and observe its gradient during the treatment.All samples were heated within 60 s to the austenitisation temperature under vacuum and then quenched to an isothermal holding temperature in the range 300–510 ∘C using helium gas.After the isothermal holding, the samples were quenched to room temperature.The quenching rate was high enough to avoid austenite-to-ferrite transformation according to TTT diagrams for the specific chemical composition.This rate was chosen to be 30 ∘C/s.For microstructural characterization, dilatometric samples were mounted on a specially designed sample holder, then ground, polished and etched with Nital 2%.Scanning Electron Microscopy analysis was carried out in a Field Emission Gun Scanning Electron Microscope JEOL 6500F operated at 15 kV.For TEM analysis, specimens were prepared from the dilatometry samples after austenitisation and isothermal holding.Although the temperature gradient within the dilatometric specimen during the heat treatment was found to be within 10 ∘C, in order to ensure that the microstructure observations were consistent with the dilatometry measurements, the sample discs were cut from the central zone of the dilatometric specimen, close to the thermocouple.The TEM discs were manually ground down to 60 μm, and then Ar-ion polished to final electron transparent thickness using a GATAN 691 PIPS system.For the observation, a JEOL JEM-2100 electron microscope operated at 200 kV was used.APT samples were also prepared from the dilatometry specimens using focused Ion Beam milling.Lift-out procedures, as described in Ref. 
were used to produce the atom probe specimens. This method was employed for APT sample preparation to study the acicular cementite in inverse bainite. Its size, on the order of tens of nanometres, and its non-uniform distribution in the microstructure made it particularly challenging to capture inverse bainite within an APT specimen using the conventional lift-out procedure. Thus, the method applied in the present study incorporated, as a first step, coarse FIB cutting at 52∘ and 0∘ sample tilt. The FIB-cut lamella was then placed on an axial manipulator and rotated 90∘ manually after opening the FIB chamber, as shown in Fig. 3. After rotation, the lamella was lifted out from the axial manipulator and then welded using platinum onto electro-polished molybdenum posts. APT measurements were performed using a local electrode atom probe in voltage mode at a specimen temperature of −213 °C. The pulse fraction and the pulse frequency were 15% and 200 kHz, respectively, for all measurements. APT data analysis was performed using the IVAS software, and the image compression factor and the kf constant were calibrated following the published procedure. A peak decomposition algorithm incorporated in the IVAS software was used to decompose the peak at a mass-to-charge ratio of 24.5 Da. The isothermal treatment at 300 ∘C, despite the macro-chemical segregation detected by EPMA, produced a homogeneous bainitic microstructure after 1 h of treatment. Fig. 4a shows the bainitic microstructure produced after 1 h of holding at 300 ∘C. Extensive carbide precipitation is evident. The carbides are fine and elongated, aligned at an angle to the length of the bainitic ferrite plate. For observation at higher magnification, TEM was employed. The TEM observations confirm the presence of elongated carbides parallel to each other within the ferrite plates. No carbides are found at the plate boundaries. A significant density of dislocations is observed in the specimen, and their distribution appears to be homogeneous. In order to obtain compositional information about the carbides and the bainitic ferrite, APT tips were obtained from an area containing both bainitic ferrite and carbides by site-specific preparation. In Fig. 5, the reconstructed APT ion maps of C, Cr, Mn and Si are shown. It is evident from the C map that two large carbon clusters are present in the analysed volume. The two large clusters are around 60 nm apart and parallel to each other, resembling the carbides observed by TEM in lower bainite, Fig. 4b. In between those clusters there are numerous smaller C accumulations without clearly defined shapes. The maps of Cr and Mn in the analysed areas do not show any signs of partitioning between ferrite and cementite, Fig. 5b. However, Si appears to be the only alloying element that shows a concentration difference across the bainitic ferrite/carbide interface. There is a slight accumulation of Si, of approximately 1.2 at.%, at the interface. Quantitative chemical information can be obtained from the proximity histograms. Fig.
5b shows the proximity histogram of the lower large C cluster with the zero position at an isosurface of 25 at.% C.The proximity histogram shows a carbon content of around 25 at.% in the centre of the carbide.Indeed, there is no partitioning of Cr or Mn.The last point of the Cr measurement appears to be high, but the error of this point is high as well, so it can be concluded that there is no significant Cr fluctuation within the carbide.The reduced concentration of Si in the carbides and its increased concentration around the α/θ interface is verified by several measurement points with very limited error margin.The above analysis shows that the carbides formed at 300 ∘C can be identified as cementite formed under non-partitioning conditions for the substitutional elements, with the Si most probably partitioning during isothermal holding, after carbide formation.In order to analyse the smaller carbon clusters observed in the areas between two adjacent carbides, C isosurface maps were constructed at carbon concentrations between 4 and 11 at.%, Fig. 6.This analysis shows that carbon accumulates in three dimensional features, while the other elements do not redistribute.The carbon concentration at these features reaches 11 at.%.The same analysis was performed for the bulk of the bainitic ferrite, this time excluding areas with a carbon content higher than 1 at.%.In this way, the carbon content in solution in the matrix was estimated.The result shows that the carbon concentration in the matrix is on average 0.65 at.%, higher than the equilibrium value 0.27 at.%, determined from the α/γ line in the phase diagram, calculated with Thermocalc and extrapolated to 300 ∘C.In contrast to the isothermal treatment at 300 ∘C, the treatment at 420 ∘C did not produce a homogeneous microstructure.The SEM micrographs show that there are areas in the microstructure that transformed into bainite, with distinct features evident after the etching, but other areas remain featureless.The featureless areas can be identified as martensite/austenite islands, and they form during the final quenching from the isothermal treatment temperature to room temperature, Fig. 7a.The M/A areas contain high concentrations of Cr, Mn and Si.The morphology of bainite after isothermal treatment at 420 ∘C is different from the one found at 300 ∘C.The shape of the bainitic ferrite plates is not acicular; the carbides are coarser and are found at the platelet boundaries.TEM observation provides information at higher magnification, Fig. 7b.The matrix is coarser and shows some granular sub-structure.Dislocations are evident, but there are areas with clearly lower density of dislocations.The carbides are coarser and more elongated than in the case of bainite formed at 300 ∘C.Fig. 8a shows the ion maps for C, Cr, Mn and Si.The C-map reveals two large carbon clusters and a low-carbon matrix, whereas the maps of substitutional elements show a slight enrichment at the interface of the carbides.Fig. 
8b shows the proximity histogram plotted from a C isosurface of 25 at.%. The proximity histogram shows that the carbon content of the carbide is around 25 at.% in the interior of the particle, but also that there is a slight increase of the Cr and Mn content close to the interface. In contrast, silicon is depleted from the interior of the carbide towards the interface, where the Si content reaches a maximum of 1.2 at.%, with the composition inside the carbide being around 0.33 at.%. The APT results suggest that the carbides forming under these conditions are also cementite, which can be slightly enriched in Cr and Mn and depleted in Si. The carbon content of the bulk was measured in this sample following the same procedure. The results indicate that the carbon content in the ferrite of upper bainite is 0.35 at.%, higher than the equilibrium value of 0.12 at.% resulting from the extrapolated α/γ line in the phase diagram at the selected temperature, calculated by Thermocalc. The specimens that were isothermally treated at 510 ∘C differed fundamentally from the ones transformed at lower temperatures, Fig. 9. The microstructure characterization reveals multiple microstructural constituents. Allotriomorphic ferrite with Widmanstätten secondary plates is evident at prior austenite grain boundaries. Additionally, an aggregate of acicular carbides surrounded by thin layers of ferrite is found to grow directionally from the α/γ interface, coinciding with the prior austenite grain boundaries, Fig. 9a. In the areas surrounding these aggregates, the microstructure remains unetched, and these areas are identified as M/A resulting from the final quench to room temperature. This is consistent with the observation that the isothermal treatment was interrupted before the transformation was complete. The APT elemental maps show multiple areas in which redistribution of C, Mn, Cr and Si is evident, Fig. 10a. There is an area clearly defined by two interfaces enriched in C, Mn and Cr, which outlines the carbide-ferrite aggregate detected in TEM. Within the aggregate, the ferrite film is depleted of C, Cr and Mn and enriched in Si. Approaching the ferrite/carbide interface, the Si concentration reaches a peak and the concentrations of C, Cr and Mn gradually increase. Finally, within the cementite, C reaches a peak concentration of around 21 at.%, with the Cr concentration being around 8 at.% and the Mn concentration around 5 at.%, Fig. 10b. The θ/α interfaces are significantly enriched in C, Mn and Cr and depleted of Si as well, as seen in the proxigram in Fig. 10b. After isothermal transformation at 300 ∘C, the microstructure is homogeneous at the micro-scale, which means that the sample contains areas that transformed into bainite while containing high Cr, Mn and Si concentrations. In this particular case, it is interesting to measure the composition of the bainitic ferrite and the carbides in order to examine the possible effect of Cr, Mn and Si on the evolution of the transformation. Therefore, for the sample preparation, the bulk composition of the area was chosen to contain a high concentration of alloying elements, so that, if any rearrangement of alloying elements occurred, it would be evident in the measurement. APT carbon maps reveal two precipitates with around 25 at.% of carbon, Fig.
5. The Cr and Mn maps show no redistribution of these elements between the precipitate and the ferritic matrix. Si, on the other hand, is found to have a lower concentration inside the precipitate. Since the isothermal treatment is 1 h long and the exact time of formation of the specific carbides is not known, we assume that Si, similarly to the other substitutional alloying elements present in the sample, did not partition during the formation stage, as is also claimed for the case of cementite formation in tempered martensite at similar temperatures. Given that the diffusivity of Si in α iron is higher than the diffusivities of Cr and Mn at 300 ∘C, it is suggested that the redistribution of Si occurred after the precipitation, during the holding at 300 ∘C. The depletion of silicon from the precipitates and the accumulation of this element at the precipitate's interface with bainitic ferrite can contribute to the retardation of carbide growth during further isothermal holding. In this framework, the time of the treatment after the precipitation of the carbides has an effect on the microstructure that is similar to tempering. As far as the other substitutional alloying elements are concerned, there is no difference between their concentrations in the carbides and in the bainitic ferrite. Furthermore, carbon clusters are observed in between the precipitates. Using carbon isosurfaces, Fig. 6, it is shown that these clusters reach concentrations higher than 11 at.% of carbon. The carbon concentration of these clusters is higher than the value of 6–8 at.% reported in the literature for Cottrell atmospheres, formed by segregation at dislocations. However, the concentration is less than expected for transitional carbides or cementite. Given that, according to TEM observations and the literature, the dislocation density is high in the bainitic ferrite, carbon is expected to segregate at dislocations. Other studies show that close to the precipitates and at the grain boundaries there is an increase in dislocation density, and carbon tends to get trapped at such lattice defects. Carbon clusters were found to be located in between the two precipitates, which could be attributed to an increased dislocation density at these locations; the dislocation density could be high there owing to the accommodation of a large transformation strain by the bainitic ferrite. The high carbon concentration observed in the matrix is consistent with observations of previous studies using APT or synchrotron techniques. An explanation for the increased carbon solubility in bainitic ferrite has been proposed by Hulme-Smith et al.
, who support the hypothesis that the higher carbon content observed in ferrite is a consequence of an increased carbon solubility due to the change in symmetry from the conventional cubic unit cell to a tetragonal unit cell. The present observations do not, however, reveal information on the tetragonality of the bainitic ferrite. In upper bainite, the results show that the transformation also produces cementite, but in this case some partitioning of Mn and Cr is observed at the cementite-ferrite interface. This trend is consistent with previous APT observations on martensitic samples tempered in a similar temperature range. As far as the bainitic ferrite matrix is concerned, there is no carbon segregation inside the ferrite region between the two adjacent cementite particles. This is in agreement with TEM observations showing a lower dislocation density in bainite formed at 420 ∘C. The dislocation density in upper bainite, although not directly measured or calculated, can be expected to reach values of the order of 10¹³ m⁻². This assumption is based on the dislocation density value calculated from tensile test data on a tempered martensite sample, as explained in Chapter 5 of , and on TEM analysis showing that the bainitic ferrite matrix in upper bainite resembles that of martensite tempered at 480 ∘C in terms of contrast. The APT tips from this specimen were prepared using site-specific preparation. In this case, choosing an area of strong segregation of Mn and Cr for the analysis was not relevant, since these areas do not transform into bainite. Thus, the tips were prepared from a transformed area, which contained carbides. The SEM micrograph of Fig. 9a shows that the cementite is always found to nucleate at prior austenite grain boundaries. In previously published research of the authors, it was shown that this cementite forms in bands where high concentrations of Mn and Cr are found. This segregation implies that the local thermodynamic conditions are different from those in the rest of the material, and the austenite decomposition at these locations will follow a different sequence, as shown in Fig. 6 of that work. The austenite in Cr-rich and Mn-rich regions will decompose following the phase formation sequence of hyper-eutectoid steels. It has been reported that in such cases the so-called inverse bainite can be formed. This type of ferrite and cementite phase mixture forms with cementite precipitating directly at the austenite grain boundary, initiating the transformation; ferrite subsequently forms surrounding the cementite. In fact, thermodynamic calculations reveal that the addition of substitutional elements to an Fe–C system changes the range of stability of the different phases. The phase diagram isopleths of 51CrV4, shown in Fig.
6 of the same work, calculated for the local composition measured by EPMA at regions with high Mn and Cr segregation, reveal a eutectoid composition of 0.4 wt% C. Therefore, in the high Mn and Cr regions the austenite of the investigated steel can decompose by forming pre-eutectoid cementite or M7C3 before forming ferrite. Carbides can be found which are directly surrounded by martensite, without any intermediate ferrite layer, which indicates that the carbides form before the ferrite and can thus act as the leading phase for the formation of this microstructure. These facts indicate that the microstructural product formed at temperatures between those of upper bainite and pearlite formation can be inverse bainite, as was reported for higher carbon steels in the literature. Chromium has a special role in the formation of this non-classical structure. It is mainly responsible for the observed shift of the eutectoid composition to lower carbon contents; it significantly retards bainite formation and it is found to partition into the carbides. A solute drag effect can explain the retardation of bainite growth at high temperatures, while the requirement for Cr diffusion for the formation of the carbides explains the slow reaction kinetics of this transformation. The morphological and chemical diversity observed can be further interpreted in terms of thermodynamics. Thermodynamic calculations were performed using the ThermoCalc software with the TCFE7 database. The α/α + γ and γ/α + γ boundary lines were calculated under the para-equilibrium condition in the range of 600–800 ∘C and then extrapolated to lower temperatures. In Fig. 11 the para-equilibrium Tγ→θ and Tγ→α lines, as well as the ortho-equilibrium Tγ→θ line, are extrapolated to lower temperatures. Based on our experimental findings, at the temperatures of 300 ∘C and 420 ∘C bainite formation can proceed under para-equilibrium conditions. However, at 510 ∘C it was experimentally observed that substitutional alloying elements partition to the carbides and segregate at the interfaces. Therefore, in this case local equilibrium conditions can be assumed, which are represented by the ortho-equilibrium Tγ→θ line in Fig. 11. For bainite forming at 300 ∘C, TEM shows that an initial, carbide-free spine of ferrite forms first. This initial plate grows into the surrounding austenite at a rate governed by the diffusion of C. At some point, the supersaturation of C in austenite is sufficient to allow para-equilibrium (PE) cementite to precipitate directly from austenite at the γ/α interface, according to Fig.
11. The cementite precipitation reaction consumes C from the remaining austenite, thus providing the driving force for further growth of the bainitic ferrite without further nucleation being necessary. For bainite forming at 420 ∘C, the diffusivity of C is higher, hence the concentration profile at the interface is broader. The undercooling is still high enough for the transformation to occur by a shear mechanism. The higher diffusivity can explain the coarser ferrite plates, while the high undercooling explains their morphology. The cementite in this case is also coarser, but it has another distinguishing feature: it is not aligned along a specific orientation, but follows the γ/α plate boundaries. This happens because cementite forms at the carbon-rich areas in the remaining austenite, surrounding the newly formed bainitic ferrite plates. In the sample isothermally treated at 510 ∘C, ferrite was found at the prior austenite grain boundaries, and is hence called allotriomorphic. However, it presented facets and sometimes an acicular morphology or even degenerate Widmanstätten features. This can be attributed to the low driving force for Widmanstätten ferrite formation. Inverse bainite formation initiates with the precipitation of carbides of Widmanstätten morphology at the prior austenite grain boundaries. Ferrite subsequently forms surrounding these carbides, because the carbon consumed by carbide formation locally triggers ferrite formation. The ferrite grows to a specific width and then, according to the APT analysis, significant segregation of Cr and Mn is found at the α/α′ boundary. It is therefore suggested that the coarsening of the ferrite ceases because of solute drag. In the present study, the bainite microstructures produced by isothermal treatment of 51CrV4 medium carbon, low alloy spring steel have been characterized by SEM, TEM and APT. It was found that: Bainitic ferrite contains more carbon in solution than is predicted from thermodynamic equilibrium. The carbon content of the bainitic ferrite becomes higher as the transformation temperature decreases. This could be attributed to the possible tetragonality of bainitic ferrite, allowing a higher solubility of C in α-Fe. The substitutional alloying elements did not partition to the carbides during lower bainite formation. Silicon was depleted from the carbides; this is a process occurring after the precipitation. During upper bainite formation, segregation of Cr and Mn occurred at the cementite-bainitic ferrite interface. A peculiar microstructure consisting of acicular carbides and ferrite formed at temperatures close to the bainite start temperature. It can be suggested, based on the calculated thermodynamics, that this microstructure is inverse bainite. During inverse bainite formation, alloying elements can diffuse within the time frame of the transformation. APT measurements show that Mn and Cr partition to the boundaries of bainitic ferrite and into the carbides. The difference in the chemical composition of the carbides in inverse bainite and their formation temperature can explain their presence and their morphology.
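For illustration only, the following minimal Python sketch (not part of the original study, which used the IVAS software and isosurface-based proxigrams) shows the kind of calculation behind a proximity histogram: compositions are binned as a function of signed distance from a reference interface. The planar interface, the bin settings and the input array names are simplifying assumptions made here, not details taken from the work above.

```python
import numpy as np

def proximity_histogram(positions, species, interface_z=0.0,
                        bin_width=0.2, half_range=5.0):
    """Composition (at.%) versus signed distance from a planar reference
    interface - a simplified stand-in for an isosurface-based proxigram.

    positions : (N, 3) numpy array of reconstructed atom coordinates in nm
    species   : length-N numpy array of element labels, e.g. 'C', 'Cr', 'Mn', 'Si', 'Fe'
    """
    distance = positions[:, 2] - interface_z              # signed distance to the interface
    edges = np.arange(-half_range, half_range + bin_width, bin_width)
    centres = 0.5 * (edges[:-1] + edges[1:])
    total, _ = np.histogram(distance, bins=edges)          # all atoms per distance bin
    profiles = {}
    for element in np.unique(species):
        counts, _ = np.histogram(distance[species == element], bins=edges)
        profiles[element] = 100.0 * counts / np.maximum(total, 1)   # at.% of each element per bin
    return centres, profiles
```

A profile obtained in this way is the kind of curve that would be compared with, for example, the roughly 25 at.% C plateau and the interfacial Si enrichment reported for the carbides.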
Atomic-scale investigation was performed on 51CrV4 steel, isothermally held at different temperatures within the bainitic temperature range. Transmission electron microscopy (TEM) analysis revealed three different morphologies: lower, upper, and inverse bainite. Atom Probe Tomography (APT) analysis of lower bainite revealed cementite particles, which showed no evidence of partitioning of substitutional elements; only carbon partitioned into cementite to the equilibrium value. Carbon in the bainitic ferrite was found to segregate at dislocations and to form Cottrell atmospheres. The concentration of carbon remaining in solution measured by APT was more than expected at the equilibrium. Upper bainite contained cementite as well. Chromium and manganese were found to redistribute at the cementite-austenite interface and the concentration of carbon in the ferritic matrix was found to be lower than the one measured in the case of lower bainite. After isothermal treatments close to the bainite start temperature, another austenite decomposition product was found at locations with high concentration of Mn and Cr, resembling inverse bainite. Site-specific APT analysis of the inverse bainite reveals significant partitioning of manganese and chromium at the carbides and at the ferrite/martensite interfaces, unlike what is found at isothermal transformation products at lower temperatures.
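As a further reader-side illustration (again an assumption-laden simplification, not the procedure actually used in the study summarized above), the sketch below estimates the carbon remaining in solid solution by voxelising a reconstruction and averaging only voxels below a chosen exclusion threshold, mirroring the way regions richer than 1 at.% C were excluded before quoting matrix carbon contents such as 0.65 and 0.35 at.%. The voxel size and function name are assumptions.

```python
import numpy as np

def matrix_carbon_at_pct(positions, species, voxel_nm=1.0, exclusion_at_pct=1.0):
    """Average carbon content (at.%) of the ferrite matrix, ignoring voxels whose
    local carbon content exceeds the exclusion threshold (clusters and carbides)."""
    idx = np.floor((positions - positions.min(axis=0)) / voxel_nm).astype(int)
    is_carbon = (species == 'C')
    voxel_total, voxel_carbon = {}, {}
    for key, c in zip(map(tuple, idx), is_carbon):
        voxel_total[key] = voxel_total.get(key, 0) + 1
        voxel_carbon[key] = voxel_carbon.get(key, 0) + int(c)
    kept_c = kept_n = 0
    for key, n in voxel_total.items():
        if 100.0 * voxel_carbon[key] / n < exclusion_at_pct:   # keep only matrix-like voxels
            kept_c += voxel_carbon[key]
            kept_n += n
    return 100.0 * kept_c / kept_n if kept_n else float('nan')
```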
424
Should tyrosine kinase inhibitors be considered for advanced non-small-cell lung cancer patients with wild type EGFR? Two systematic reviews and meta-analyses of randomized trials
After reports of clinical trials1-5 and clinical guidelines, the use of the tyrosine kinase inhibitors, erlotinib and gefitinib, is now common practice for first-line treatment of patients with non–small-cell lung cancer with sensitizing epidermal growth factor receptor mutations.Beyond first-line treatment, in particular for patients with wild type EGFR who have received first-line chemotherapy, recommendations regarding the potential benefits of TKIs are less clear.6,7,With the exception of a few modern trials that only recruited patients with wild type disease,8,9 most evaluations of these drugs have been in unselected patients.Many of these trials did not test for EGFR mutations systematically, therefore, their results are reported either for all randomized patients, ignoring EGFR mutation status, or for the subset of patients in whom the status was known.Results of these trials are mixed and interpretation difficult.Similarly, attempts have been made to synthesize existing data, to inform clinical guidelines and practice.Some have pooled all available results irrespective of mutation status or treatment line.10,11,This approach gives results based on large numbers of randomized patients, but results are difficult to interpret.Others have attempted to summarize the results of trials within subgroups defined according to EGFR status.12,13,Although this seems sensible and offers a more meaningful interpretation of the results, there are clear drawbacks.For example, trials that did not report their results according to mutation status were excluded, as were patients for whom no EGFR testing was carried out.When only a subset of the randomized patients within a given trial were tested, pooling results might introduce bias,14 not least because we cannot be certain that these patients were representative.Despite these difficulties, there is a need to obtain effective and reliable summaries of the effects of TKIs in patients with and without sensitizing EGFR mutations after first-line treatment.Therefore, we set out to take into account the potential issues and biases.Rather than relying on one analytical approach, we planned a number of separate analyses.In addition to pooled analyses of all patients from all trials and of patient subgroups defined according to EGFR status, we aimed to carry out appropriate tests of interaction between treatment effects and patient characteristics, namely mutation status, to ascertain whether effects differed between patient groups.Finally, by assuming the likely ratios of patients with wild type and mutated EGFR in trials based on geographical location,15 we could assess the effect of increasing proportions of wild type patients on the overall treatment estimates using metaregression.Interpretation of the results was not based on any one of the individual results from the 3 approaches but on the combined results of all 3 approaches, such that we could feel more confident in our interpretation when results from all 3 analyses were complementary.Analyses were carried out in trials of second-line treatment, when TKIs were compared with standard chemotherapy regimens and in the maintenance therapy setting, compared with no active treatment.The authors have stated that they have no conflicts of interest.All methods were prespecified in 2 registered protocols.Included trials should have randomized patients with advanced NSCLC irrespective of sex, age, histology, ethnicity, smoking history, or EGFR mutational status.Patients should not have received previous 
TKIs.For the systematic review of second-line treatment, trials should have compared a TKI versus chemotherapy after first-line chemotherapy.For maintenance treatment, trials should have compared a TKI versus no TKI after first-line chemotherapy.Systematic searches16 were conducted in MedLine, EMBASE, Cochrane CENTRAL, clinical trials registers, and relevant conference proceedings.We also searched reference lists of relevant randomized controlled trials and clinical reviews.The risk of bias of individual trials was assessed16,17 with a low risk of bias being desirable for sequence generation, allocation concealment, and completeness of outcome data reporting.Trials in the maintenance setting should have also been at low risk of bias for blinding.Progression-free survival was the primary outcome, allowing assessment of the effects of immediate TKI versus no immediate TKI without interference from the use of TKIs on progression.Overall survival was the secondary outcome, accepting this limitation.Data on patient characteristics, including histology, ethnicity, EGFR mutational and smoking status, interventions, and outcomes were extracted from trial reports.When EGFR mutational status of patients was not reported it was estimated based on trial characteristics.Because trials did not necessarily test all patients for EGFR mutations and/or report results according to EGFR mutation status, we used 3 analytical approaches to make maximum use of the data available.Results were assessed for consistency and whether, taken together, they established with greater certainty, the effects of TKIs in patients with wild type EGFR.All interpretation was based on the balance of the results across these 3 approaches and not on any of the individual approaches in isolation.When possible, hazard ratio estimates of effect and associated statistics were either extracted or estimated from the reported analyses18-20 according to EGFR mutation status for each trial.These were used to estimate the interaction between treatment effect and EGFR mutational status, calculated as the ratio of the estimated HRs within the EGFR wild type subgroup and the EGFR mutation subgroup for each trial.14,Interaction HRs were combined across trials using the fixed-effects inverse-variance model.Heterogeneity was assessed using χ2 test and I2 statistic.21,Such interactions are based on a comparison of the treatment effects of the TKIs for patients with wild type and mutated EGFR within trials.Therefore, the interactions might only be calculated for trials in which patients were tested for EGFR mutation status and the HR estimates for the wild type and mutation populations were reported.They cannot be calculated for trials that recruited patients with wild type EGFR exclusively.To estimate the effect of TKI outcomes for patients with wild type and mutated EGFR, using the maximum available data, the trial HRs and associated statistics were combined across trials using the fixed-effects inverse-variance model.Again, we were restricted to patients tested for EGFR mutation status, but could include trials that exclusively recruited patients with wild type EGFR.Heterogeneity was assessed using the χ2 test and I2 statistic21 and where identified, the random effects model was applied.The absolute effect on median PFS was calculated by applying the relevant HR to the average control group median PFS, assuming proportional hazards.Using metaregression we investigated how increasing proportions of patients with wild type EGFR affected treatment 
effects.All trials reporting overall estimates of effect were included.Estimates of the effect of TKIs on PFS for all randomized patients were extracted or estimated from the reported analyses.18-20,When only a small proportion of patients had been tested for mutation status, we assumed that the proportion of patients with wild type EGFR would remain consistent across the whole trial.When no testing of mutation status had been carried out, or reported in a trial, we estimated the proportions of randomized patients having wild type EGFR to be 90% in western trials and 60% in trials of East Asian patients.We then estimated the change in treatment effect with increasing proportions of patients with wild type EGFR.We also explored whether the TKI used or the chemotherapy regimen used affected the effect of TKIs.Characteristics found to be important were adjusted for in the metaregression and explored using the F ratio.22,We identified 25 potentially eligible RCTs, of TKIs as second-line treatment and maintenance treatment.Of 18 potentially eligible trials, randomizing 4456 patients, 1 trial reported no results,23 2 trials24 and NCT00536107 were not published, and 1 randomized phase II feasibility trial never reached the phase III stage.25,Results were based on the 14 remaining eligible trials.8,9,26-37,Trials compared TKIs with either docetaxel or pemetrexed chemotherapy and were conducted between 2003 and 2012.Six trials were carried out in predominantly Asian populations.Randomized patients had good performance status and median age ranged from 54.5 to 67.5 years.Most were men and either current or former smokers.One trial33 included considerably more women and only never-smokers.Three trials randomized patients with wild type EGFR exclusively.8,9,37,Five trials evaluated EGFR mutation status using a range of methods.Mutation status was not evaluated in 5 trials.Twelve trials reported PFS and 14 trials reported OS.One trial,36 published in Chinese language, was judged to be unclear for all domains.The remaining 13 trials were all at low risk of bias regarding incomplete outcome data.Missing data on EGFR mutational status largely resulted from unavailable tumor samples or because the trials were conducted before widespread testing.All were judged to be at low risk of bias for sequence generation.For allocation concealment, 10 trials were judged to be at low risk of bias and 3 were judged as unclear risk.No trials were judged to be at high risk for any of the domains assessed.8,9,26-37,When information could not be obtained from the publications, we contacted the authors.44,Data on the effects of TKIs compared with chemotherapy on PFS within groups of patients with EGFR mutations and wild type EGFR were available from 4 trials, including 442 patients with wild type EGFR and 113 with EGFR mutations.There was strong evidence of an interaction between the effect of TKIs and EGFR mutational status,8,9,27,29,31,33,35,37 with the benefit of treatment of TKIs evident only among patients with EGFR mutations.This was consistent across trials.Results for patients with wild type EGFR were available for 9 trials and 1302 patients.There was evidence of a detriment with TKIs compared with chemotherapy, with some evidence of variation between the trial results.However, the effect was fairly similar with a random-effects model.Assuming a median baseline PFS of 13 weeks, based on the average time of PFS in the control arms of included trials; HR, 1.31 translates to a 3-week absolute reduction in median 
PFS. Four trials reported PFS for patients with EGFR mutations. Based on these 113 patients, there was evidence of a benefit with TKIs compared with chemotherapy and no evidence of variation between the trial results. Again, assuming a median PFS of 13 weeks, this translates to a 25-week increase in the absolute median PFS. Twelve trials including 3963 patients reported PFS for all patients, irrespective of EGFR status. Metaregression suggested a decreasing effect of TKIs with increasing proportions of wild type patients. The treatment effect predicted by the model when 100% of patients had wild type EGFR favors chemotherapy, whereas when 100% of patients had EGFR mutations, the model predicted a benefit of TKIs.8,9,26-29,31,33-35,37 No differences in the treatment effects of TKIs versus chemotherapy were observed when trials were subdivided according to the chemotherapy used: docetaxel alone, pemetrexed alone, or docetaxel and pemetrexed. There was a difference in the treatment effect according to the TKI used in all randomized patients. However, when the analysis was adjusted to account for substantial heterogeneity within the group of trials using gefitinib, there was no longer evidence of this difference between the TKIs. Additionally, when the TKI type was taken into account in the metaregression, there was still evidence of a decreasing effect of TKIs with increasing proportions of patients with wild type EGFR. Data on the effects of TKIs on OS within groups of patients with EGFR mutations and wild type EGFR were available from 4 trials, including 540 patients with wild type EGFR and 97 with EGFR mutations. Based on the available data, there was no evidence of an interaction between the effect of TKIs on OS and EGFR mutational status. This relationship appeared consistent across trials. For the maintenance setting, we identified 7 eligible trials. No results were available for 1 ongoing trial; therefore, 6 trials38-43 were included. Trials were conducted between 2001 and 2009 and compared TKIs with placebo38,40-43 or observation.39 Five trials randomized predominantly western patients38-40,42,43 and 1 trial randomized only Chinese patients.41 Overall, randomized patients had good performance status, with median age from 55 to 64 years. They were mostly men and either current or former smokers, except for 1 trial,41 in which more than half of the included patients had never smoked. Three trials evaluated EGFR mutation status using a range of methods. Mutation status was not evaluated in the remaining trials. Five trials were judged to be at low risk of bias for allocation concealment, sequence generation, and blinding.38-41,43 One trial was at low risk of bias for all domains except for sequence generation and allocation concealment, which were unclear.42 No trials were identified as being at high risk of bias. Missing data on EGFR mutational status largely resulted from unavailable tumor samples or because the trials were conducted before widespread testing. Progression-free survival results were reported separately for wild type patients and EGFR mutation-positive patients in 4 trials, totalling 908 patients. There was strong evidence of an interaction between the effect of TKIs on PFS and EGFR mutational status, with the larger effect being observed in patients with EGFR mutations.38,39,41,43 There was some evidence of inconsistency in the effect between trials. However, the effect was fairly similar with a random effects model. Progression-free survival results for patients with wild type EGFR were available from 4 trials and 778 patients. There
was evidence of a PFS benefit with TKIs in patients with wild type EGFR and no evidence of variation between the trial results.Assuming a median PFS in the control group of 13 weeks, this translates to an absolute improvement in median PFS of approximately 3 weeks.For patients with EGFR mutations, data were available from 4 trials but only 130 patients.Although the data available for this analysis were very limited, there was a large PFS benefit with TKIs but with clear evidence of variation between the trial results.However, the results were similar when a random effects model was used.This translated to an absolute improvement in median PFS of approximately 10 months.Six trials reported PFS for all patients irrespective of EGFR mutation status.The metaregression suggested that treatment effect varied according to the proportion of patients with wild type EGFR.When 100% of patients had wild type EGFR, the model suggested that there is no difference in PFS with TKIs compared with no active treatment, whereas when 100% of patients had EGFR mutations, a large benefit of TKIs was indicated.38-43,However, the metaregression was based on only 6 trials and was clearly limited.We conducted an exploratory analysis to assess whether the benefit of TKIs in patients with wild type EGFR was related to histological type.Data were available for 4 trials and 2129 patients.There was a significant difference in effect between the 2 subgroups with little suggestion of variation between trials.However, benefits of TKI were observed for patients with squamous and adenocarcinoma.Three trials reported OS according to mutation status.We found no evidence to suggest a difference in the effect of TKIs in patients with mutations compared with those with wild type disease.This relationship was similar between the trials.Taken together, evidence from 3 distinct analytical approaches suggests a difference in the effect of TKIs on PFS according to EGFR mutation status.For patients with wild type EGFR, TKIs seem to be an ineffective second-line treatment compared with chemotherapy, but might be effective as maintenance treatment, compared with no active treatment.In both settings, TKIs offer PFS benefits to patients with mutated EGFR.Pooling the estimated interactions between treatment effects and patient characteristics for each trial, using only within-trial information and avoiding ecological bias,14 provides the most reliable estimate of the relationship between the effect of TKIs and EGFR mutation status.Nevertheless, it relies on trials reporting results for patients with wild type EGFR and EGFR mutations separately.Although not all trials tested patients systematically, we have no reason to suspect selective reporting of results or selective testing of patients, which could introduce bias.However, the precision of the estimates of interaction was inevitably limited by the relatively low numbers of included trials reporting results for both mutation subgroups.Incorporating additional data, mainly from trials that recruited only patients with wild type EGFR, enabled us to provide further evidence that TKIs are inferior to chemotherapy as second-line treatment in patients with wild type EGFR, reducing median PFS by approximately 3 weeks.As maintenance treatment, we found that TKIs offer a modest improvement in median PFS compared with no active treatment of approximately 3 weeks for patients with wild type EGFR.We did, however, see some evidence of inconsistency between the trial results in the second-line treatment 
setting, which might reflect the clinically heterogeneous nature of this large group of patients. There will, of course, be variation in terms of known prognostic factors such as age, tumor histology, and stage; however, it is also possible that other, as yet undefined, characteristics might further explain this variability. Although pooling results for patient subgroups from trials in this way might introduce bias14 and the meta-analysis is again limited because many trials tested only a relatively small subset of patients for mutation status, or did not test at all, these analyses do provide the best available estimate of the effects of TKIs in patients with wild type and mutated EGFR. Using metaregression, including data from all randomized patients from all eligible trials, we assessed how the proportion of patients with wild type EGFR modified the effect of TKIs on PFS. The results suggest that the benefit of TKIs relative to chemotherapy on PFS diminishes with an increasing proportion of patients with wild type EGFR. This pattern holds for the maintenance setting, suggesting no evidence of a benefit of TKIs when 100% of patients have wild type EGFR. These metaregressions relied in part on assumptions about the EGFR mutation status of trial populations. Furthermore, the relationship between the effect of TKIs and the proportion of patients with wild type EGFR might not be representative of the true relationship between the effect of TKIs and the mutation status of individual patients.14,45 There are clear limitations to this approach, not least that we had to make assumptions or estimates of the proportions of patients with wild type EGFR for a number of the trials. The relatively small number of data points, and the fact that, partly because of the estimates made, the proportions tend to cluster around either the 60% or 90% positions, mean that the regression model is limited. Nonetheless, this approach has allowed us to maximize the available data by including all relevant trials, and the results do add further support to our other analyses. Although none of the individual approaches is without potential limitations, using results obtained from 3 distinct methods has increased our confidence in interpreting and drawing conclusions regarding the effect of EGFR mutation status on the response to erlotinib and gefitinib in these 2 treatment settings. These systematic reviews are the first to have estimated the interaction between the effects of TKIs on PFS in patients with wild type EGFR and those with EGFR mutations appropriately and reliably.14 Furthermore, they are the first to have based interpretation on a range of analyses, making the best use of all available data, and to have assessed the consistency of results. We have attempted to evaluate the effect of treatment in wild type patients, who form the majority of patients with NSCLC worldwide. Since the importance of activating EGFR mutations in patients' response to TKIs was recognized, research has concentrated on patients carrying such mutations. Moreover, a recent review focused on the effect of TKIs in mutation-positive patients and only used the limited reported data from patients who were tested for EGFR mutations.12 Using or estimating the proportions of patients with wild type EGFR, we included results from all trials in a metaregression, rather than only the minority of patients for whom results according to mutational status were reported. We avoided estimating effects based on all randomized patients irrespective of EGFR status, which are potentially
misleading.Although we provided clear evidence of a difference in the effect of TKIs according to mutation status on PFS, there was no evidence to support a difference in their effect on OS in either treatment setting.However, most of the included trials allowed treatment crossover on progression, inevitably making the OS results more difficult to interpret.The full effect of TKI treatment on OS in patients with advanced NSCLC therefore remains unclear.In the maintenance setting, our strict eligibility criteria meant that we included only trials in which treatment comparisons were unconfounded.This inevitably limited the number of trials and patients available for this analysis and hence our results must be viewed with particular caution.Furthermore, the comparator in these trials was essentially no active treatment.It is unclear whether the potential benefit of TKIs for patients with wild type EGFR would persist if compared for example, with pemetrexed as is recommended in clinical guidelines6-7 and NICE TA162, NCCN.We did not find any trials that directly compared TKIs with chemotherapy in the maintenance setting.One 3-arm trial included in this meta-analysis compared each of gemcitabine and erlotinib with observation alone39 but no comparison between the 2 treatments was reported.Currently TKIs and chemotherapy are recommended options for second-line treatment for patients with wild type EGFR.Our results bring this into doubt and suggest that for patients with wild type EGFR who are well enough to receive it, chemotherapy should be viewed as the standard of care.Furthermore, particularly outside of East Asia, most unselected patients will have wild type EGFR.Guidelines that recommend TKIs for unselected populations should be reconsidered.Our results highlight the importance of suitable biopsies and reliable EGFR mutation testing to guide optimal clinical treatment.Our findings regarding maintenance treatment, coupled with our results in the second-line setting might lead us to surmise that, compared with appropriate chemotherapy, patients with wild-type EGFR are unlikely to benefit from TKIs.However, for patients in whom no alternative is recommended, for example, in patients with squamous cell carcinoma, TKIs might be considered.Without direct comparisons of TKIs with chemotherapy in this setting, the best treatment options remain unclear.There is still uncertainty regarding the best treatment option for the overwhelming majority of advanced NSCLC patients worldwide with wild type EGFR.However, based on these results, TKIs are not an appropriate second-line treatment for patients who are fit to receive chemotherapy, but might offer some scope as maintenance treatment.
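To make the analytical approach described above concrete, a minimal Python sketch is given below. It is illustrative only: it pools log hazard ratios with fixed-effects inverse-variance weights, reports an I² heterogeneity statistic, and forms a within-trial interaction HR as the ratio of the wild type and mutation subgroup estimates. The function names and the example values in the comments are placeholders, not trial data or code from the reviews.

```python
import math

def pool_fixed_effect(hrs, ci_lower, ci_upper):
    """Fixed-effects inverse-variance pooling of hazard ratios reported with 95% CIs.
    Returns the pooled HR, its 95% CI and the I-squared heterogeneity statistic (%)."""
    logs = [math.log(h) for h in hrs]
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in zip(ci_lower, ci_upper)]
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * x for w, x in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    q = sum(w * (x - pooled) ** 2 for w, x in zip(weights, logs))      # Cochran's Q
    i_squared = max(0.0, 100.0 * (q - (len(hrs) - 1)) / q) if q > 0 else 0.0
    ci = (math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se))
    return math.exp(pooled), ci, i_squared

def interaction_hr(hr_wild_type, hr_mutation):
    """Within-trial interaction: ratio of the wild type subgroup HR to the mutation subgroup HR."""
    return hr_wild_type / hr_mutation

# Placeholder example (not trial data): a wild type HR of 1.3 and a mutation HR of 0.4
# within the same trial give an interaction HR of 3.25 for that trial.
```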
Guidance concerning tyrosine kinase inhibitors (TKIs) for patients with wild type epidermal growth factor receptor (EGFR) and advanced non-small-cell lung cancer (NSCLC) after first-line treatment is unclear. We assessed the effect of TKIs as second-line therapy and maintenance therapy after first-line chemotherapy in two systematic reviews and meta-analyses, focusing on patients without EGFR mutations. Systematic searches were completed and data extracted from eligible randomized controlled trials. Three analytical approaches were used to maximize available data. Fourteen trials of second-line treatment (4388 patients) were included. Results showed the effect of TKIs on progression-free survival (PFS) depended on EGFR status (interaction hazard ratio [HR], 2.69; P =.004). Chemotherapy benefited patients with wild type EGFR (HR, 1.31; P <.0001), TKIs benefited patients with mutations (HR, 0.34; P =.0002). Based on 12 trials (85% of randomized patients) the benefits of TKIs on PFS decreased with increasing proportions of patients with wild type EGFR (P =.014). Six trials of maintenance therapy (2697 patients) were included. Results showed that although the effect of TKIs on PFS depended on EGFR status (interaction HR, 3.58; P <.0001), all benefited from TKIs (wild type EGFR: HR, 0.82; P =.01; mutated EGFR: HR, 0.24; P <.0001). There was a suggestion that benefits of TKIs on PFS decreased with increasing proportions of patients with wild type EGFR (P =.11). Chemotherapy should be standard second-line treatment for patients with advanced NSCLC and wild type EGFR. TKIs might be unsuitable for unselected patients. TKIs appear to benefit all patients compared with no active treatment as maintenance treatment, however, direct comparisons with chemotherapy are needed.
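As a worked check on how the absolute PFS figures quoted above follow from the hazard ratios, the short sketch below applies the stated proportional-hazards assumption, simplified further here to exponential survival so that the treated median equals the control median divided by the HR. This is a reader-side approximation, not a calculation taken from the reviews.

```python
def median_shift_weeks(hr, control_median_weeks=13.0):
    """Change in median PFS implied by a hazard ratio when survival is treated as
    (approximately) exponential: treated median = control median / HR."""
    return control_median_weeks / hr - control_median_weeks

# Using the hazard ratios and the 13-week control median quoted above:
print(round(median_shift_weeks(1.31)))  # second line, wild type EGFR: about -3 weeks
print(round(median_shift_weeks(0.34)))  # second line, mutated EGFR: about +25 weeks
print(round(median_shift_weeks(0.82)))  # maintenance, wild type EGFR: about +3 weeks
print(round(median_shift_weeks(0.24)))  # maintenance, mutated EGFR: about +41 weeks (~10 months)
```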
425
Impact of spatial and climatic conditions on phytochemical diversity and in vitro antioxidant activity of Indian Aloe vera (L.) Burm.f.
India is located between 8°-30° N and 68°-97.5° E. There is extreme variation in altitude, from sea level to the heights of the Himalayas. India is floristically rich, with about 33% of its botanical wealth being endemic. The Indian Council of Agricultural Research recognizes 8 agro-climatic regions on the basis of physiographic, climatic and cultural features. The different agro-climatic/phyto-geographical regions of India hold rich diversity in both wild and cultivated plant gene pools. There are about 1500 medicinal plants in India, and traditional societies still use native wild plants for medicinal purposes. Aloe vera grows in arid climates and is widely distributed in India. Aloe vera (L.) Burm.f. is a succulent perennial plant with green, tapering, spiny, marginated and dagger-shaped fleshy leaves filled with a clear viscous gel. Aloe vera is the most commercialized Aloe species belonging to the Xanthorrhoeaceae family. This plant grows readily in hot and dry climates. Aloe vera has its origin on the Arabian Peninsula, although it is also known in the Mediterranean, the American subcontinents and India. Aloe vera juice has been used traditionally for its purgative effects, and the fresh leaf gel is used in different formulations and cosmetic preparations. Aloe vera contains hundreds of nutrient and bio-active compounds, including vitamins, enzymes, minerals, sugars, lignin, anthraquinones, saponins, salicylic acid and amino acids, which are responsible for its medicinal properties. Its secondary metabolites have multiple activities, such as anti-inflammatory, antibacterial, antioxidant, immune-boosting, anticancer, antidiabetic, anti-ageing and sunburn-relieving effects. Several uses of Aloe vera have also been reported in traditional medicine systems, such as for burn injury, eczema, cosmetics, inflammation and fever. Reactive Oxygen Species (ROS) is a term used to describe reactive oxygen and nitrogen species that are a common outcome of normal aerobic cellular metabolic processes. During daily activities and with advancing age, oxidative substances and free radicals accumulate in cells, affecting various organs and systems in the body. Overproduction of these free radicals contributes to chronic and other degenerative diseases in humans. Uncontrolled production of these free radicals leads to attack on various biomolecules, cellular machinery, cell membranes, lipids, proteins, enzymes and DNA, causing oxidative stress and ultimately cell death. Defense against free radicals can be enhanced by taking sufficient amounts of exogenous antioxidants. Micronutrients like vitamin E, β carotene and vitamin C are the major antioxidants; these must be provided in the diet, as the body cannot produce them. Medicinal plants are the richest bioresource of drugs for traditional systems of medicine, and humans have long used plant extracts to improve their health and life-style. Prime sources of naturally occurring antioxidants for humans are fruits, vegetables and spices. The search for novel natural antioxidants from tea, fruits, vegetables, herbs and spices continues, with efforts being made by researchers all over the globe. Herbs are a worldwide focus as a source of novel antioxidant compounds because of their safety compared with synthetic antioxidants. Many plants have been screened for their antioxidant potential, and there is growing interest in replacing synthetic antioxidants in foods with natural ingredients because of concern over their possible carcinogenic effects. Medicinal plants contain a wide variety of free radical
scavenging molecules such as phenolic compounds, nitrogen compounds, vitamins, terpenoids, carotenoids and other secondary metabolites which are reported to have antioxidant activity.Several techniques have been used to estimate and determine the presence of such bio-active phytoconstituents in medicinal plants.Chromatography and spectroscopic techniques are the most useful and popular tools used for this purpose.Fourier Transform Infrared Spectrophotometer is perhaps the most powerful tool for identifying the types of chemical bonds present in compounds.The wavelength of light absorbed is a characteristic of the chemical bond as can be seen in the annotated spectrum.By interpreting the infrared absorption spectrum, the chemical bonds in a molecule can be determined.FTIR spectroscopy allows the analysis of a relevant amount of compositional and structural information in plants.Moreover, it is an established time-saving method to characterize and identify functional groups."The changing temperatures and wind patterns associated with climate change are causing noticeable effects on the life cycle, distribution and phyto-chemical composition of the world's vegetation, including medicinal and aromatic plants.Alongside its diverse geography, India is a home to extraordinary variety of climatic conditions; ranging from tropical in the south to temperate and alpine in the Himalayan north, where elevated regions receive sustained winter snowfall.These vast climatic variations may cause a difference in the phytoconstituents of plants.Previous studies state that phytochemical composition of plants is influenced by a variety of environmental factors including the geography, climate, soil type, sun exposure, grazing stress, seasonal changes etc.Present study focuses on to predict effects of geographical and climatic conditions on phytochemical diversity of Aloe vera from all the 6 different agro-climatic zones of India.Crude aqueous leaf extracts of different Aloe vera samples were also used to evaluate antioxidant potential of the plant.Samples were collected from 12 different sites covering 6 agro-climatic zones of India.Each zone had 2 sites.Geographical locations of collection sites along with their average temperature and rainfall are depicted in Table 1.Samples were collected in the months of Jan–Feb 2013 from naturally growing plants.Healthy leaves of Aloe vera were collected from five individual plants from each location and placed in sterile plastic bags.All samples were brought to the laboratory in an ice box and processed further.Fresh gel of the plant is used for skin treatments and dry form of the plant used locally in folk medicine as a laxative.The plant material was identified and authenticated by comparing the herbarium specimen available in Department of Genetics, M. D. University, Rohtak and voucher specimens were deposited in departmental herbarium.Sample collection pictures from different climatic zones are depicted in Plate 1.Leaves were washed with running tap water and dried in shade.Dried leaves were crushed using a willey mill.Aqueous extracts of different samples were prepared by cold percolation method.To 100 g of dried leaf mass 1 L water was added in a flask.It was incubated in an incubator shaker at 28°C for 48–72 h. at 180 rpm.Extract was filtered through Whatman filter paper No. 
1. The filtrate was concentrated with the help of a rotary vacuum evaporator at 40°C and the dried mass was weighed. Table 2 shows the yield of extracts from the different samples. The crude aqueous Aloe vera extracts were characterized using a Fourier transform infrared spectrophotometer. 2 mg of the sample was mixed with 100 mg of potassium bromide and then compressed to prepare a salt disc approximately 3 mm in diameter; the disc was immediately placed in the sample holder. FTIR spectra were recorded in the absorption range between 400 and 4000 cm−1. Five commonly used methods, namely the DPPH free radical scavenging assay, the hydrogen peroxide scavenging assay, the reducing power assay, the metal chelating assay and the β carotene-linoleic acid assay, were used to assess the antioxidant potential of the Aloe vera aqueous leaf extracts. Each experiment was done in triplicate and mean values were used to interpret the results. The ability of the extracts to scavenge hydrogen peroxide was estimated following the method of Ruch et al. The extracts were dissolved in phosphate buffer at a concentration of 1 mg/ml. 1 ml of extract was added to 3.4 ml of phosphate buffer, and 600 μl of 400 mM H2O2 was added to the solution. The solution was kept at room temperature for 40 min and the absorption was measured at 230 nm. Ascorbic acid was used as the control. The reductive potential of the extracts was determined according to the method of Oyaizu. Extracts and the standard, ascorbic acid, were mixed with 2.5 ml of phosphate buffer and 2.5 ml of potassium ferricyanide, and 2.5 ml of chloroacetic acid was added to the solution. The solution was centrifuged for 10 min at 3000 rpm. The upper layer of the solution was separated and mixed with 2.5 ml of distilled water and 0.5 ml of FeCl3, and the absorbance was then measured at 700 nm. An increase in the absorbance of the reaction mixture indicated the reducing power of the different Aloe vera samples. The yield of aqueous extract varied from 3.0 to 4.1 g/100 g of dried leaf mass used; the maximum yield was obtained from the Punjab sample and the least from the M.P. sample. The FTIR spectra were used to identify the functional groups of the active components based on the peak values in the infrared region. Absorption bands at frequencies of 3500–3200, 3300–2500, 3000–2850, 2830–2695, 1500–1400, 1470–1450, 1370–1350, 1360–1290, 1335–1250, 1320–1000, 1300–1150, 1250–1020, 950–910, 910–665, 900–675, 725–720, 700–610 and 690–515 cm−1 were observed in the different Aloe vera samples, including ranges characteristic of aromatic and alkyl halide groups. The FTIR peak values and corresponding functional groups are presented in Table 3. The presence of various functional groups of different compounds was found. Transmission spectra of the different samples are depicted in Plate 2. In the present study, 12 crude extracts of Aloe vera were investigated for their antioxidant potential using different methods. All assays showed the antioxidative potential of the Aloe vera extracts, with all extracts showing significant antioxidant activity ranging from 50 to 66%. The H.P., Punjab and J&K samples showed high values in comparison to the other samples, while comparatively the least activity was observed for the Gujarat, Telangana, Goa and Kerala samples. The reduction of the DPPH radical is determined by the decrease in absorbance at 517 nm induced by antioxidants. Observed values for this assay were in the range of 50.2 to 66.4%; the Punjab sample showed the highest antioxidant capacity and the least was observed for the Telangana sample.
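The percentage values reported for these assays follow the usual inhibition calculation from absorbance readings; since the exact formula is not spelled out in the text, the standard relation is assumed in the minimal sketch below, and the example absorbances are hypothetical.

```python
def percent_scavenging(abs_control, abs_sample):
    """Percentage radical-scavenging (inhibition) from absorbance readings,
    e.g. DPPH at 517 nm or hydrogen peroxide at 230 nm."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical readings: a control absorbance of 1.000 and a sample absorbance of 0.336
# correspond to ~66.4% scavenging, the highest DPPH value reported above.
print(percent_scavenging(1.000, 0.336))
```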
2 shows the percentages of free radical scavenging activity of the different Aloe vera samples. Ascorbic acid, used as the standard, showed 96% radical scavenging activity. The scavenging potential of the aqueous extracts of Aloe vera in the H2O2 assay ranged from 48.46 to 64.34%, while the activity of the standard was 97%. The maximum potential was shown by the H.P. sample, followed by the J&K and Punjab samples with almost identical values; the Goa sample showed the minimum activity. The reducing power of the extracts contributed significantly towards their antioxidant effects. The H.P. and Punjab samples had high absorbance values, indicating greater reductive potential and electron-donating ability for stabilising free radicals. The W.B. and Goa samples showed the same value, which was higher than that of the Kerala sample but lower than that of the M.P. sample. The absorbance value for the standard was 0.9. The activity of all extracts and of ascorbic acid, in terms of their absorbance values, is represented in Fig. 4. In the metal chelating assay, ferrozine forms a complex with ferrous ions, generating a violet colour; in the presence of other chelating agents, complex formation is disrupted and the colour of the complex is reduced. Ferrous ions are commonly found in food systems and are considered prooxidants. Observed values for the chelating activity ranged between 44.6 and 56.26%. The Himachal Pradesh sample showed high metal chelating activity, and the Haryana sample also showed good activity, almost equal to that of the J&K sample. The W.B. and Goa samples showed similar but comparatively lower values. The ascorbic acid control showed no activity. The activity of all extracts, in terms of their absorbance values, is represented in Fig. 5. The total antioxidant activities of the different aqueous extracts of Aloe vera were determined using the β-carotene-linoleic acid model system; the results are shown in Fig. 6. All extracts showed reductive potential, ranging from 50.14 to 63.22%. The maximum activity was observed for the H.P.
and Punjab samples, and the minimum for the Telangana sample. β-carotene, used as the standard, showed 92% reduction potential. Genotype and environmental parameters are known to effect changes in phenotype among organisms, resulting in inter-specific variation, and species with a wide geographic range generally show more diversity. Plant architecture, flowering, fruiting, phytochemical composition and in situ competition with other species are all strongly affected by climate change. India is characterised by strong temperature and rainfall variations, along with other environmental and climatic fluctuations across seasons, and the phytochemical composition of plants is greatly influenced by these different agro-climatic conditions. There is therefore a need to understand the effects of varying temperature, precipitation, soil moisture and fertility by growing plants under such conditions, to determine how variation in these geo-climatic factors affects plant phenology and nutrient, antioxidant and secondary metabolite levels. Numerous secondary metabolites found in plants contribute significant biological activities, and plant materials contain many types of antioxidants with varied activities. Aloe vera is a well-known medicinal plant that has been used for many years throughout the world for therapeutic as well as cosmetic purposes in different systems of medicine. Although Aloe vera is native to the Arabian Peninsula, it grows all over India, wild in Maharashtra and Tamil Nadu, whereas Andhra Pradesh, Gujarat and Rajasthan are known for its cultivation. The plants are drought resistant and able to tolerate a wide range of climatic conditions, but hot, humid conditions with high rainfall are most suitable. Environmental temperature plays a significant role in antioxidant activity, and this effect is more pronounced in cold weather; therefore, plant material was collected in the winter season. Solvent type and extract preparation method affect the phytochemical concentration and the different activities of plant extracts. Methanolic extraction gives the best yield and a more complex phenolic composition, but aqueous extracts are preferred over organic solvents as the latter confer some level of toxicity. Phytochemical diversity among the different Aloe vera samples was assessed with the help of FTIR analysis. FTIR has proven to be a valuable tool for the characterisation and identification of compounds and functional groups present in an unknown plant-extract mixture; in addition, FTIR spectra of pure compounds are usually so distinctive that they serve as a molecular "fingerprint", and for most common plant compounds the spectrum of an unknown compound can be identified by comparison with a library of known compounds. FTIR analysis in the present study showed the presence of various medicinally important phytoconstituents in the different Aloe vera samples. Table 3 shows the presence of 20 different types of functional groups of bio-active compounds at different frequencies, which rationalises the use of the plant as an herbal remedy. The FTIR results also indicated that some rarer compounds were present in only one or two samples, in contrast to the more commonly found groups. The analysis of these chemical constituents would therefore help in determining the various biological activities of the plant. With regard to antioxidant potential, the high phenolic content of Aloe vera is responsible for its strong antioxidant activity. There is a significant correlation between the total phenolic content and
antioxidant potential of the plants.Previous studies on A. vera state that the plant contains substantial amounts of antioxidants including α-tocopherol, carotenoids, ascorbic acid, flavonoids, and tannins.Potent antioxidative compounds like Aloe barbendol, Aloe emodin, barbaloin A and Aloe chrysone have been isolated from extracts of Aloe vera.Glutathione peroxidase activity, superoxide dismutase enzymes and a phenolic antioxidant were found to be present in A. vera gel.Biological activities of Aloe vera may be due to the synergistic action of these compounds, rather than from a single defined component.Our study emphasized the antioxidant potential of Aloe vera aqueous leaf extracts.Different assays were employed for assessing hydrogen donating ability, reducing ability, metal chelating activity etc. of different Aloe vera extracts.DPPH radical scavenging method is a simple and most popular method to evaluate the free radical scavenging ability of samples.DPPH radical scavenging activity was supposed to be due to hydrogen-donating ability of extracts.Hydrogen peroxide is toxic to cell when it gives rise to hydroxyl radicals in the cells.In the reducing power assay, the presence of reductants in the sample would result in the reduction of Fe3 + to Fe2 + by donating an electron.Metal chelating activity is based on chelation of Fe2 + ions by the reagent ferrozine, resulting in formation of a complex with Fe2 + ions.Hydroperoxides from linoleic acid oxidation can cause a great harm to cellular machinery.The presence of antioxidants hinders the extent of β carotene bleaching by acting on the linoleate-free radical.Our study confirms that Aloe vera extracts are quite effective in stabilizing all these free radicals and their initiated chains thereafter.Extracts of Highland and Semi-arid zones exhibited maximum antioxidant potential in the present observation.Tropical zone samples were least active as deduced by the antioxidant assays performed.Aloe vera can grow in almost all types of environmental conditions but there are several factors that can affect the quality and quantity of a particular constituent.Aloe vera is a cold sensitive plant.Studies conducted on plants in stress conditions showed higher production of flavonoids, anthocyanins and mucilaginous substances.An increase in unsaturated fatty acids is generally associated with cooler climates leading to production of antioxidants for a self-defense system against environmental stress.This statement supports our present findings suggesting pronounced effects of environmental temperature on different Aloe vera extracts.Lower temperature leads to higher production of phenolics and vice versa.Plants from hilly areas are thought to be more important from a nutritional and dietary point of view.Samples from colder regions and semi-arid region showed the enhanced antioxidant activity and this supports the statement that under stress more phytochemicals are produced in plants.Consequently, phytochemical content can vary with growing environmental conditions.From the present work, it could be concluded that agro-climatic locations along with temperature and rainfall have pronounced effects on the Aloe vera plant phytoconstituents and its antioxidant potential.However, there is still a need to investigate effects of soil properties and other related biotic and abiotic factors.Present work concludes that diverse geographic and climatic factors affect the phyto-constituents in different Aloe vera samples.FTIR spectroscopy is proved to be a 
reliable and sensitive method for detecting biomolecular composition. The present study showed that Aloe vera is a promising source of antioxidant activity. Temperature, as a climatic factor, had a significant effect on the antioxidant potential of Aloe vera samples collected from different regions of India, and the study also demonstrated that antioxidant activity was higher in Aloe vera plants grown in Northern India than in Southern India. Screening the antioxidant potential of Aloe vera from different climatic regions can help in selecting locations for mass production of the plant and so enhance its pharmaceutical and market value. The good antioxidant properties of Aloe could be exploited in the food, medicine and cosmetic industries and may help in formulating new, more potent antioxidant drugs of natural origin. The authors declare that they have no competing interests.
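The scavenging percentages reported above (DPPH, H2O2 and β-carotene assays) all follow the standard percent-inhibition calculation relative to a reagent-only control. The sketch below illustrates that calculation in Python; the absorbance readings and sample names are hypothetical placeholders, not values from this study.

```python
# Percent-inhibition calculation used in DPPH-type scavenging assays:
# activity (%) = (A_control - A_sample) / A_control * 100.
# All absorbance values below are hypothetical placeholders.

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Scavenging activity of a sample relative to the reagent-only control."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical DPPH absorbances at 517 nm for a control and three zone samples.
a_control = 0.820                          # DPPH + solvent, no extract
samples = {"Punjab": 0.276, "H.P.": 0.300, "Telangana": 0.408}

for zone, a_sample in samples.items():
    print(f"{zone}: {percent_inhibition(a_control, a_sample):.1f}% scavenging")
```

The same formula, applied to absorbance at 230 nm or to the extent of β-carotene bleaching, yields the hydrogen peroxide scavenging and β-carotene-linoleic acid assay percentages.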
The aim of present study was to focus on the impact of spatial and different climatic conditions on phytochemical diversity and antioxidant potential of aqueous leaf extracts of Aloe vera collected from different climatic zones of India. Crude aqueous extracts of Aloe vera from different states varied in climatic conditions of India were screened for phytochemical diversity analysis and in vitro antioxidant activity. Phytochemical analysis was performed with the help of Fourier Transform Infrared (FTIR) Spectroscopy. DPPH free radical scavenging assay, metal chelating assay, hydrogen peroxide scavenging assay, reducing power assay and β carotene-linoleic assay were used to assess the antioxidant potential of Aloe vera aqueous leaf extracts. FTIR analysis in present study showed presence of various phytoconstituents from different Aloe vera samples. All antioxidant assays revealed that Highland and Semi-arid zone samples possessed higher antioxidant activity whereas Tropical zone samples possessed minimum. It could be concluded that different agro-climatic conditions have effects on phytochemical diversity and antioxidant potential of Aloe vera plant. This study demonstrated that antioxidant activity was higher in Aloe vera plants grown in Northern India in comparison to Southern India. Study also concluded that more phytochemicals are produced in plants under cold stress conditions. Aloe vera can be a potential source of novel natural antioxidant compounds.
426
Antibacterial, antiproliferative and antioxidant activity of leaf extracts of selected Solanaceae species
The Solanaceae family is one of the largest and most complex families of the Angiosperms, comprising some 2500 species in about 100 genera. Plants of this family produce a wide variety of secondary metabolites with different biological activities, which makes them very important from an economic, agricultural and pharmaceutical point of view. In Sudan, the family is represented in nature by 9 genera and 30 species, most of which have diverse medicinal uses in Sudanese traditional medicine. Roots of Solanum incanum L. are used as an antiasthmatic and for the treatment of dysentery and snake bite; fruits and leaves of S. nigrum L. are used to treat fever, diarrhoea and eye diseases; leaves of S. schimperianum Hochst are used to treat wounds; and roots and leaves of Withania somnifera and Physalis lagascae Roem. & Schult. are used as a tonic, as a diuretic and as a poultice for swellings. Plants of the Solanaceae family are known to biosynthesize secondary metabolites with interesting biological activity, such as hydroxycinnamic acid amides, steroid alkaloids, polyphenols and glycoalkaloids, presumably to protect themselves from damage by phytopathogens (Rahman and Choudhary, 1998; Macoy et al., 2015). At higher concentrations, glycoalkaloids are toxic to living organisms; in contrast, some glycoalkaloids also have beneficial effects from both ecological and human perspectives, with pharmacological applications across disciplines as antibacterial, antioxidant, anti-inflammatory and anticancer agents. Investigations of native Sudanese Solanaceae species have been mainly botanical, and there are scanty data on their phytoconstituents and biological activities. In this context, the objective of this study was to evaluate the possible antibacterial, antiproliferative and antioxidant activities of leaf extracts of S. incanum, S. schimperianum, S. nigrum, P. lagascae and W. somnifera, as well as the LC-ESI/MS profile of S.
schimperianum.Leaves of the studied plant were collected from eastern Sudan, Erkowit region, in April 2012.Botanical identification and authentication were performed and voucher specimens have been deposited in Botany Department Herbarium, Faculty of Science, University of Khartoum, Sudan.Twenty grams of the dry powder from each plant material were macerated in methanol, at room temperature for 72 h.The extracts were evaporated under vacuum to dryness to obtain 1.9 g, 4.3 g, 4.8 g, 4.4 g and 4.1 g.Extraction of SGA was carried out according to the method described by Mohy-Ud-Din.Twenty grams of each powdered plant material were extracted with 5% aqueous acetic acid at room temperature for 30 min.Samples were then vacuum filtered through a Whatman no 4 filter paper.The polar fraction was then basified with NH4OH and extracted with water saturated n-butanol.Finally the solvent was evaporated with a rotavapor to dryness to obtain 2.1 g, 2.5 g, 2.9 g, 2.4 g and 2.0 g.Well-established methods were used.The microorganisms panel selected for study corresponds to some of the etnopharmacological uses.Standard strains of microorganism, obtained from Medicinal and Aromatic Institute of Research, National Research Center, Khartoum, were used in this study.The bacterial species used were the Gram-negative Escherichia coli and Pseudomonas aeruginosa, and Gram positive Bacillus subtilis and Staphylococcus aureus.The two-fold serial microdilution method described by Eloff was used to determine the MIC values for each extract against bacteria growth.All dilutions were prepared under aseptic conditions.A volume of 100 μL of the extracts dissolved in DMSO in duplicate was serially diluted two-fold with sterile distilled water and 100 μL of bacterial culture in MH Broth, corresponding to 106 CFU/mL, was added to each well.Gentamicin and amoxicillin were used as positive controls and DMSO as negative control.Plates were incubated overnight at 37 °C.Afterwards, 40 μL of 0.2 mg/mL of p-iodonitrotetrazolium violet was added to each well to indicate microbial growth.The colorless salt of tetrazolium acts as an electron acceptor and is reduced to a red colored formazan product by biologically active organisms.The solution in wells remains clear or shows a marked decrease in intensity of color after incubation with INT at the concentration where bacterial growth is inhibited.Plates were further incubated at 37 °C for 2 h and the MIC was determined as the lowest concentration inhibiting microbial growth, indicated by a decrease in the intensity of the red color of the formazan.The experiment was performed in triplicate.Anti-proliferative activities of each extract were evaluated with four cell lines established from human breast carcinoma samples and from human colon adenocarcinoma samples."HCT116 and HT29 cells were cultivated in Dulbecco's minimum essential medium supplemented with 10% fetal calf serum, 1% Penicillin/streptomycin and 2 mM l-glutamine.MCF7 and MDA-MB-231 cells were grown in RPMI medium with the same additives.Cells were routinely seeded at 100,000 cells/mL and maintained weekly in a humidified atmosphere of 5% CO2 at 37 °C.Cell viability assay was performed using the thiazolyl blue tetrazolium bromide procedure as described by Mosman.In brief, cancer cells were seeded in 96-well plate at 10,000 cells/well for HT29, MCF-7 and MDA-MB231 cells, at 5000 cells/well for HCT116 cells.Twenty four hours after seeding, 100 μL of medium containing increasing concentrations of each extract were added to each 
well for 72 h at 37 °C. Dried extracts were first diluted with DMSO to a final concentration of 50 mg/mL or 200 mg/mL. After incubation, the medium was discarded, 100 μL/well of MTT solution was added and the plates were incubated for 2 h. Water-insoluble formazan blue crystals were finally dissolved in DMSO and each plate was read at 570 nm. IC50 values were calculated using GraphPad Prism. Data are expressed as IC50 ± SD obtained from quadruplicate determinations in two independent experiments. As a control, we routinely tested fenofibrate, a member of the fibrate family, on HT29 and HCT116 cells, in order to confirm a moderate effect on HT29 cell viability whereas the IC50 is > 50 μM for HCT116 cells. Antioxidant activity of the extracts was estimated using the in vitro 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging method. Test samples were dissolved separately in methanol to obtain test solutions of 1 mg/mL, and series of extract solutions of different concentrations were prepared by dilution with methanol. Assays were performed in 96-well microtiter plates: 140 μL of 0.6 × 10−6 mol/L DPPH was added to each well containing 70 μL of sample, the mixture was shaken gently and left to stand for 30 min in the dark at room temperature, and the absorbance was measured spectrophotometrically at 517 nm using a microtiter plate reader. The blank was prepared in the same way using methanol and sample without DPPH, and the control in the same way using DPPH and methanol without sample. Ascorbic acid was used as the reference antioxidant compound and every analysis was done in triplicate. The IC50 value was calculated from the linear regression of plots of test-sample concentration against the mean percentage of antioxidant activity obtained from the triplicate assays. Results were expressed as mean ± SEM, and the IC50 values obtained from the regression plots had good coefficients of correlation. Total phenol contents of the methanol extracts were determined using a modified Folin Ciocalteu method. An aliquot of the extract was mixed with 5 mL Folin Ciocalteu reagent and 4 mL of sodium carbonate; the tubes were vortexed for 15 s and allowed to stand for 30 min at 40 °C for colour development, and the absorbance was then measured at 765 nm. Sample extracts were evaluated at a final concentration of 0.1 mg/mL. Total phenolic contents were expressed as gallic acid equivalents in milligrams per gram of sample. Mass analyses of the extract were carried out on a Q Exactive Plus mass spectrometer equipped with a heated electrospray ionization probe. The instrument parameters were as follows: spray voltage 3.5 kV, sheath gas flow rate 36, auxiliary gas flow rate 11, spare gas flow rate 1, capillary temperature 320 °C, probe heater temperature 320 °C, and S-lens RF level 50. All mass spectrometry parameters were optimized for sensitivity to the target analytes using the instrument control software. Acquisition was performed in Full-scan, DD-MS2 and PRM modes. Full-scan mass measurements were carried out in positive mode with a resolution of 70,000, AGC target at 1e6, maximum ion time at 50 ms and a scan range of 100–1000 m/z. For the DD-MS2 analysis, the following parameters were used: microscans 1, resolution 17,500, AGC target at 1e5, maximum IT of 30 ms, loop count 5, MSX count 1, isolation window 2
m/z, utilizing a stepped NCE at 10, 30 and 60. The targeted acquisition of the 12 HCAAs was carried out in parallel reaction monitoring mode, with the following instrument settings: microscans at 1, resolution at 35,000, AGC target at 5e5, maximum ion time at 100 ms, MSX count at 1, isolation window at 1.0 m/z, normalized collision energy at 20. Data acquisition and processing were carried out with Xcalibur 3.0 software. Methanolic extracts and SGAFs of S. incanum, S. schimperianum, S. nigrum, P. lagascae and W. somnifera leaves were evaluated for their in vitro antibacterial activity against Gram-positive bacteria (S. aureus and B. subtilis) and Gram-negative bacteria (E. coli and P. aeruginosa); results are presented in Table 1. The sensitivity of the tested Gram-positive and Gram-negative bacteria to the different extracts was variable. The antibacterial activity of the SGAFs of S. incanum, S. schimperianum and P. lagascae against B. subtilis exceeded that of their corresponding methanolic extracts by 7.8-fold, 3.9-fold and > 2-fold respectively, while the activity of the S. nigrum, P. lagascae and W. somnifera SGAFs against S. aureus was higher than that of their respective methanolic extracts by 33.3-fold for the first and > 66.7-fold for the last two. The SGAFs of S. nigrum and W. somnifera against E. coli were more active by > 66.7-fold, and those of S. incanum and P. lagascae against P. aeruginosa by > 4-fold and > 32.3-fold, than their respective methanolic extracts. On the contrary, the antibacterial activity of the SGAF of S. incanum against S. aureus and E. coli was reduced by 8.4-fold and 2.1-fold respectively compared with its methanolic extract, and the same was observed for P. lagascae against E. coli. However, the antibacterial activity of S. schimperianum against S. aureus, E. coli and P. aeruginosa was comparable for the two types of extract, as was that of S. nigrum and W. somnifera against P. aeruginosa. These results support the previously reported antibacterial activity of crude extracts of S. nigrum, S. incanum, S. schimperianum and W. somnifera. Methanolic extracts and SGAFs of S. incanum, S. schimperianum, S. nigrum, P. lagascae and W. somnifera leaf were also tested in vitro for anti-proliferative activity against the HT29, HCT116, MCF7 and MDA-MB231 cell lines. Among the methanolic extracts, only that of S. schimperianum leaf demonstrated interesting anti-proliferative activity against the four cell lines, with IC50 values in the range of 2.69 to 19.83 μg/mL; the highest activity was obtained against HT29, followed by HCT116, MDA-MB231 and MCF7. All other methanolic extracts showed anti-proliferative activity against the four cell lines with IC50 values > 50 μg/mL. The anti-proliferative activity of the SGAFs was variable. Potent anti-proliferative activity was observed for the SGAF of W. somnifera leaf, with the highest activity against HCT116 followed by MCF7, HT29 and MDA-MB231. Ichikawa et al. reported that withanolides inhibit cyclooxygenase enzymes, lipid peroxidation and proliferation of tumor cells through the suppression of nuclear factor-κB and NF-κB-regulated gene products. The P. lagascae leaf SGAF displayed the second highest anti-proliferative activity, with the strongest effect against HCT116 followed by MCF7, MDA-MB231 and HT29. This is the first report of the antiproliferative activity of P.
lagascae; a previous study on P. crassifolia demonstrated its potent and selective cytotoxicity against prostate cancer cells. In contrast to its methanolic extract, the SGAF of S. schimperianum leaf exhibited anti-proliferative activity with higher IC50 values than those obtained for the methanolic extract. The S. incanum and S. nigrum leaf SGAFs were less active, showing anti-proliferative activity against the four cell lines with IC50 values > 50 μg/mL. Ding et al. found that SGAs exhibited antitumor activity and induced apoptosis in human gastric cancer MGC-803 cells; they further stated that the number and type of sugars and the substitution of a hydroxyl on the steroidal alkaloid backbone play an important role in the anti-proliferative activity. The methanolic extracts and SGAFs of S. incanum, S. schimperianum, S. nigrum, P. lagascae and W. somnifera leaves were evaluated for their in vitro antioxidant activity using DPPH and ABTS assays, and the results are presented in Table 3. In general, the SGAFs of all species demonstrated higher scavenging activities than their corresponding methanolic extracts, and the scavenging of the ABTS radical was generally different from that of the DPPH radical. Compared with the methanolic extracts, the SGAFs increased the DPPH and ABTS scavenging capacity respectively by 1.1- and 13.5-fold for S. incanum, 1.6- and 1.7-fold for S. nigrum, 2.4- and 8.2-fold for P. lagascae and 1.9- and 34-fold for W. somnifera. Furthermore, S. schimperianum displayed the strongest antioxidant activity in both assays, where a sharp increase of 45- and 140-fold was observed for its SGAF. Several studies have reported variations in the biological activities of extracts prepared using different extraction techniques. Zheng and Wang and Jimoh et al. reported that factors such as the stereoselectivity of the radicals or the solubility of the extracts in different testing systems affect the capacity of extracts to react with and quench different radicals, and Wang et al. found that some compounds with ABTS scavenging activity showed no DPPH scavenging activity. The total polyphenolic contents, expressed as mg gallic acid equivalent/g of dry material, are listed in Table 4. The amount of total phenolics varied among the studied species and ranged from 0.20 to 0.48 mg GAE/g. P. lagascae had the highest polyphenolic content, followed by S. schimperianum and W. somnifera, while S. nigrum and S. incanum had the lowest content. Previous studies attributed the antioxidant activity of S. incanum, S. nigrum and W. somnifera leaf mainly to the phenolic content of the extracts. In this study, the correlation between the antioxidant capacities and the total phenols of the methanol extracts was also determined using Pearson's correlation coefficients. The R2 values between total phenols and the antioxidant capacities obtained from the DPPH and ABTS assays were 0.3727 and 0.1757, respectively. Thus, the antioxidant capacity of these plants did not correlate with their phenolic content, suggesting that phenolic compounds are not the main contributors to the antioxidant capacities of the leaves of these plants. Based on the results obtained from the antiproliferative assay, the leaf methanolic extract of S.
schimperianum was subjected to LC–MS analyses. Twelve known hydroxycinnamic acid amides were detected, with N-caffeoyl agmatine appearing at the highest intensity. Furthermore, the positive HR-ESI-MS full-scan mass spectrum revealed the presence of several isobaric, high-intensity quasi-molecular ions at m/z 207.17998, 208.18777 and 216.18523, appearing predominantly as doubly charged 2 + ions. The mass errors between the theoretically calculated and measured masses ranged from 0.50 to 5.00 ppm. Based on the high-resolution accurate mass measurements and analysis of their isotopic patterns, the elemental compositions of the three abovementioned ions were determined as C27H44ON2, C27H46ON2 and C27H46O2N2, respectively. In the MS2 spectra of all three singly charged parent ions, at m/z 413.35229, 415.36771 and 431.36389, a subsequent loss of ammonia and water was observed. The elemental compositions, together with the MS2 data, permitted us to assign the abovementioned chemical compositions to 3-amino steroid alkaloids. There have been several reports in the literature describing 3-amino steroid alkaloids isolated from the genus Solanum with an elemental composition of C27H46ON2, such as solacallinidine and soladunalinidine. Moreover, another isobaric compound, the 3β-amino steroid alkaloid solanopubamine, was recently isolated by Al-Rehaily et al. in high quantity from the aerial parts of S. schimperianum. These data allowed us to tentatively assign one of the discovered ions having the elemental composition C27H46ON2 as solanopubamine. The ions with elemental composition C27H44ON2 can be putatively ascribed to dehydro derivatives of the 3-amino steroid alkaloids mentioned above, and the ions with elemental composition C27H46O2N2 can be tentatively assigned as solanocapsine, found previously in S. pseudocapsicum. A variety of HCAAs has been found throughout the genus Solanum, and they have been found to represent the main phenylpropanoid constituents in 12 Solanum species. The HCAAs reported in this study were also detected previously in the roots, where Nε-feruloyllysine, as well as HCAAs of agmatine and cadaverine and sinapoyl putrescine, were reported in the genus Solanum for the first time. HCAAs have been reported to possess good activity against a wide range of microbial pathogens. Feruloyl dopamine, feruloyl tyramine and feruloyl tryptamine were found to be effective against S. aureus 209 and S. pyogenes with MIC values between 190 and 372 μM, and Nε-feruloyl lysine exhibited an MIC of 349 μM against S. aureus 3359 and ATCC 6538P. Solanopubamine was found to exhibit good antifungal activity against Candida albicans and C. tenuis with an MIC of 12.5 μg/mL. However, the steroidal alkaloids detected in this study might not be the main contributors to the antiproliferative activity of this species. A previous study showed that solanopubamine was inactive against several cancer cell lines, whereas a derivative of solanocapsine, O-methylsolanocapsine, isolated from S. pseudocapsicum leaf, was found to possess cytotoxic properties against HeLa cell lines, with IC50 values of 39.90 ± 0.03 and 34.65 ± 0.06 by MTT and SRB assays, respectively. On the other hand, several reports in the literature describe the antiproliferative activity and mechanism of action of hydroxycinnamic acids and derivatives against several cancer cell lines. Generally, the antibacterial and free radical scavenging activities of the leaf SGAFs of the investigated Solanaceae species were higher than those obtained for their corresponding methanolic extracts. The SGAF of the leaf of S.
schimperianum displayed the strongest antioxidant activity in both assays. Based on the antiproliferative activity of the fractions, further investigation of S. schimperianum was warranted, and HCAAs and steroid alkaloids were identified by LC–MS in the methanolic extract of S. schimperianum. The results of this study therefore suggest that Solanaceae plants from Sudan could be a good natural source of chemopreventive and/or chemotherapeutic agents.
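Two of the quantitative steps described above, the IC50 estimated from a linear regression of concentration against mean percent scavenging and the Pearson correlation between total phenolic content and antioxidant capacity, can be sketched as follows. This is a minimal Python illustration; the concentrations, scavenging percentages and phenolic contents are hypothetical placeholders, not the measured values reported in Tables 3 and 4.

```python
# Minimal sketch of the antioxidant data analysis described above.
# (1) IC50 from a linear regression of concentration vs. mean % scavenging.
# (2) Pearson R^2 between total phenolic content and antioxidant capacity.
# All numbers below are hypothetical placeholders, not measured data.
import numpy as np

# (1) Hypothetical DPPH dose-response for one extract (ug/mL vs % scavenging).
conc = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
scavenging = np.array([18.0, 30.0, 44.0, 61.0, 78.0])
slope, intercept = np.polyfit(conc, scavenging, 1)
ic50 = (50.0 - intercept) / slope          # concentration giving 50% scavenging
print(f"Estimated IC50: {ic50:.1f} ug/mL")

# (2) Hypothetical TPC (mg GAE/g) and DPPH scavenging (%) for the five species.
tpc = np.array([0.20, 0.48, 0.22, 0.45, 0.40])
dpph = np.array([35.0, 80.0, 40.0, 55.0, 60.0])
r = np.corrcoef(tpc, dpph)[0, 1]
print(f"Pearson R^2 (TPC vs DPPH): {r**2:.4f}")
```

A low R², as found in the study, indicates that phenolic content alone does not explain the observed antioxidant capacity.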
Plants belonging to the Solanaceae family are widely used in Sudanese traditional medicine for the treatment of different ailments. This study aimed to evaluate the in vitro antibacterial, antiproliferative and antioxidant activities of methanolic leaf extracts and steroidal glycoalkaloid fractions (SGAFs) of Solanum incanum L., S. schimperianum Hochst, S. nigrum L., Physalis lagascae Roem. & Schult. and Withania somnifera (L.) Dunal. Methods: Antibacterial activity of the methanolic extracts and SGAFs was determined against two Gram-positive and two Gram-negative bacteria by the microdilution method. Anti-proliferative activity was determined against human cell lines (MCF7, MDA-MB-231, HT29 and HCT116) by the thiazolyl blue tetrazolium bromide (MTT) procedure. Antioxidant activity was evaluated by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) (ABTS) radical scavenging methods. The methanolic extract of S. schimperianum was analyzed using liquid chromatography coupled to an Orbitrap mass spectrometer with an electrospray ionization source (LC–MS). Results: The sensitivity of Gram-positive and Gram-negative bacteria to each extract was variable (MIC values in the range of 15–> 1000 μg/mL). Only the methanolic extract of S. schimperianum leaf demonstrated interesting anti-proliferative activity against the human cell lines tested, with IC50 values in the range of 2.69 to 19.83 μg/mL, while the highest activity among the SGAFs was obtained from W. somnifera leaf, with IC50 values in the range of 1.29 to 5.00 μg/mL. In both assays the SGAFs of all species demonstrated higher scavenging activity than their respective methanolic extracts. The SGAF of S. schimperianum displayed the strongest antioxidant activity in both assays, with IC50 values of 3.5 ± 0.2 μg/mL (DPPH) and 3.5 ± 0.3 μg/mL (ABTS). The correlation coefficient (R2) between the antioxidant capacities and the total phenolic contents of the methanol extracts suggested that phenolic compounds are not the main contributors to the antioxidant capacities of the leaves of these plants. Twelve known hydroxycinnamic acid amides (HCAAs) were tentatively identified in the methanolic extract of S. schimperianum leaf, with N-caffeoyl agmatine appearing at the highest intensity. Moreover, steroid alkaloids were also detected, and the presence of solanopubamine and solanocapsine as well as dehydro derivatives of the 3-amino steroid alkaloids was suggested.
427
Topical cyclodextrin reduces amyloid beta and inflammation improving retinal function in ageing mice
Ageing is associated with cellular decline which is partly linked to metabolic rate.The outer retina has the highest metabolic demand in the body required to maintain the oxygen demanding photoreceptor population.Here with age there is progressive accumulation of extracellular material including neurotoxic amyloid beta, lipids and proteins that are inflammatory such as complement."These accumulate on Bruch's membrane restricting the exchange of metabolic nutrients between the outer retina and its blood supply.In mice these deposits are relatively linear along BM.In humans, deposits are focal and are called drusen.These are a key risk factor for age-related macular degeneration when they accumulate in the central retina.With progressive deposition and inflammation 30% of the photoreceptor population is lost in both humans and rodents in normal ageing.In humans, retinal ageing can develop into AMD where progressive inflammation and deposition result in central retina atrophy.Mice lack this area of specialisation and do not develop retinal atrophy but do suffer from similar deposition, inflammation and cell loss across the retina.AMD is the leading cause of blindness in those over 65 years in the West and is growing rapidly as populations age.In 50% of cases it is linked to immune vulnerability being associated with polymorphisms of complement genes.Currently, there is no cure for this neurodegenerative disease, although systemic immunotherapeutic approaches have tried to reduce retinal Aβ load.However, topical drug administration has largely been ignored as it was seen as unlikely to be effective due to drug dilution before it reached the retina.This assumes that drugs would have to pass through the anterior eye and vitreous before entering the retina.However, penetration of the drug may be obtained via the conjunctiva and sclera into the retina.Cyclodextrins are a family of cyclic polysaccharide compounds with a hydrophilic shell enclosing a hydrophobic cavity.This structure allows them to form water–soluble complexes with otherwise insoluble hydrophobic compounds.This has led to their utilisation as carriers to increase the aqueous solubility and stability of hydrophobic drugs.They have undergone extensive safety studies and are approved by the Food and Drug Administration for pharmaceutical use and dietary supplements.Topical administration results in their rapid retinal accumulation, presumably entering the eye via the conjunctiva."Recently, CDs systemic delivery has shown efficacy in an Alzheimer's mouse model reducing the size of Aβ plaques in the brain and upregulating genes associated with cholesterol transport and Aβ clearance.Further, systemic delivery significantly reduces lipofuscin deposits in the retina, which are an age related lipid rich pigmented deposit that accumulates in the retinal pigmented epithelium.CDs are known to bind to cholesterol and at high concentrations, they serve as a cholesterol sink.At low concentrations, CDs act as a cholesterol shuttle, transporting it between membranes.Hence, they can clear cholesterol which is known to be deposited on BM and whose presence has been linked to AMD.Here, we ask whether topical CDs delivery has the ability to erode Aβ and reduce inflammation in the aged mouse retina and what impact this has on retinal function.This was based on our prior observation that CDs had the ability to enter the retina via the conjunctiva.We explore this in normal aged mice but also ask if it has similar abilities in terms of Aβ and inflammation 
alone in aged complement factor H mice that have been proposed as a murine model of AMD as it shares a genotype with 50% of AMD patients.While there remain significant questions regarding mouse AMD models due to the absence of a macular, the aged Cfh−/− mouse does experience elevated deposition and inflammation and has reduced photoreceptor numbers and compromised visual function.8-9 months old C57BL/6 and 6–7 months old Cfh−/− mice which were backcrossed onto C57BL/6 genetic background for more than 10 generations were used.Animals were housed under a 12/12 light dark cycle with access to food and water ad libitum.All animals were used with University College London ethics committee approval that conformed to the United Kingdom Animal License Act 1986.UK Home Office project license.C57BL/6 mice were treated with 3 μl of 10% 2-Hydroxypropyl-β-cyclodextrin in phosphate buffered saline as eye drops bilaterally 3 times daily for 3 months.Controls were untreated.The Cfh−/− mice were divided into 3 groups.The first was treated with 3 μl of 10% 2-Hydroxypropyl-β-cyclodextrin as above as eye drops 3 times a day for 3 months.The second of Cfh−/− mice was left untreated.The third group was treated with 3 μl of 10% β-CD as above as eye drops 3 times a day for 3 days per month for 3 months.After treatment C57BL/6 animals were given full field flash ERG to assess retinal function in response under scotopic and photopic conditions similar to Hoh Kam et al. using the ColorDome Ganzfeld ERG.Mice were dark-adapted overnight for scotopic measurements and anaesthetised with 6% Ketamine, 10% Dormitor, and 84% sterile water at 5ul/g intraperitoneal injection.Pupils were dilated prior to recordings.Ground and reference subdermal electrodes were placed subcutaneously near the hindquarter and between the eyes respectively and the mouse placed on a heated pad.Recording gold electrodes were placed on the cornea.ERG was carried out under scotopic conditions for both eyes simultaneously, with increasing stimulus strengths using a 6500 K white light at; 3.5 × 10−6, 3.5 × 10−5, 3.5 × 10−4, 0.03, 0.3, 2.8 and 28.1 cd s/m2.After the scotopic series mice were adapted to a 20 cd/m2 background for 20 min.Then photopic responses to white light flash stimuli of 0.3, 2.8, 28.1 and 84.2 cd s/m2 were recorded with a background light of 20 cd/m2.An average of 20–25 readings were taken for each intensity.Statistical differences between groups were evaluated by using random ANOVA.After ERGs, C57BL/6 mice were culled by cervical dislocation as were the Cfh−/− mice from which recordings were not undertaken.Eyes were collected and fixed in 4% paraformaldehyde in phosphate buffered saline, pH 7.4, for 1 h, cryopreserved in 30% sucrose in PBS and embedded in optimum cutting temperature compound.10 μm cryosections were thaw-mounted on a slide and incubated for 1 h at room temperature in a 5% Normal Donkey serum in 0.3% Triton X-100 in PBS, pH 7.4.This was followed by an overnight incubation with either a mouse monoclonal antibody to Aβ 4G8, a mouse monoclonal antibody to RPE65, both were conjugated with an Alexa Fluor 568, or a rat monoclonal antibody to complement C3b diluted in 1% Normal Donkey Serum in 0.3% Triton X-100 in PBS.For the Cfh−/− mice, we used a goat polyclonal to complement C3.After primary antibody incubation, sections were washed several times in 0.1 M PBS then slides stained for active C3b were incubated in a secondary antibody, donkey anti-rat conjugated with Alexa Fluor 488 and for a donkey anti-goat conjugated with 
Alexa Fluor 488 for C3, made up in 2% Normal Donkey Serum in 0.3% Triton X-100 in PBS at a dilution of 1:2000 for 1 h at room temperature.Negative controls were undertaken by omitting the primary antibody.After secondary antibody incubation, sections were washed and nuclei stained with DAPI.Slides were then washed in 0.1 M PBS followed by washes in Tris buffered Saline.Slides were mounted in Vectashield and coverslipped.For lipid detection retinal sections were stained with a saturated solution of Sudan Black B in 70% ethanol for 1 h at room temperature and then washed in several changes of distilled water.Slides were mounted in glycerol and then coverslipped.Eyes were dissected on ice and the retina and RPE-choroidal tissues were snap frozen in liquid nitrogen.Protein was then extracted by homogenising the samples in 2% SDS with protease inhibitor cocktail, and centrifuged at 13,000 ×g.The supernatant was transferred to a new microcentrifuge tube and will be used for Western blots of C3 and RPE65.Aβ was extracted from the resultant pellet with 70% formic acid and the mixture was then centrifuged at 13,000 ×g.The supernatant was then transferred to a new microcentrifuge tube and the pellet discarded.The formic acid in the supernatant was evaporated using a speed-Vac concentrator and the protein pellet was reconstituted in 10% dimethyl sulfoxide in 2 mol/L Tris–HCl.Protein concentration was measured with an absorbance of 595 nm and Bovine Serum Albumin was used as a standard protein concentration.Equal amounts of proteins were separated by a 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and electrophoretically transferred onto nitrocellulose membranes.The nitrocellulose membranes were pre-treated with 5% non-fat dried milk in 1 M PBS for 1 h and incubated overnight at 4 °C with either a goat polyclonal antibody to C3, a rabbit monoclonal to RPE65, a mouse monoclonal antibody to Aβ 4G8 or a mouse monoclonal to α-tubulin followed by several washes in 0.05% Tween-20 in 1 M PBS.The membranes were then incubated with the respective secondary antibodies; rabbit anti-goat HRP conjugated, goat anti-rabbit HRP conjugated and goat anti-mouse HRP conjugated for 1 h. 
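The protein quantification step described above, absorbance read at 595 nm against a bovine serum albumin standard, amounts to fitting a standard curve and interpolating the sample readings. The sketch below illustrates this in Python; the standard concentrations, absorbances and sample labels are hypothetical placeholders, not values from this study.

```python
# Sketch of protein quantification against a BSA standard curve read at 595 nm.
# Standard concentrations, absorbances and sample labels are hypothetical.
import numpy as np

bsa_ug_per_ml = np.array([0, 125, 250, 500, 1000, 2000])       # BSA standards
a595_standards = np.array([0.00, 0.08, 0.16, 0.31, 0.60, 1.15])

slope, intercept = np.polyfit(bsa_ug_per_ml, a595_standards, 1)

def protein_conc(a595: float) -> float:
    """Interpolate total protein (ug/mL) from the fitted standard curve."""
    return (a595 - intercept) / slope

samples = {"retina, treated": 0.42, "RPE-choroid, control": 0.55}
for name, absorbance in samples.items():
    print(f"{name}: {protein_conc(absorbance):.0f} ug/mL total protein")
```

Equal protein loading for the Western blots would then be achieved by diluting each sample to the same concentration before SDS-PAGE.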
Immunoreactivities were visualised by exposing x-ray films to blots incubated with ECL reagent.Total protein profile was determined by staining blots with Ponceau S solution to check the transfer efficiency and quantification.Protein bands were then photographed and scanned.The absolute intensity of each band was then measured using Adobe Photoshop CS5 extended.Fluorescence images were taken in JPEG format at ×400 using an Epi-fluorescence bright-field microscope.Images were montaged and the integrated density, which is the product of the area chosen and the mean grey value, were recorded using Adobe Photoshop CS5 extended."The lasso tool was used to draw a line all the way around the RPE and Bruch's membrane interface to measure the amyloid beta, RPE65, C3 and the C3b expression in this area.Scanned images of the immunoblots were inverted to grayscale format and the mean gray value was measured for each protein band by using the lasso tool to draw a line all the way around the edges of the band using Adobe Photoshop CS5 extended.The absolute intensity was calculated by multiplying the mean gray value and the pixel value.The protein bands were quantified and their ratios to alpha tubulin were calculated and plotted into graphs."A Mann–Whitney U test was used to compare groups and a one way ANOVA with post hoc analysis with Dunn's multiple comparison test was used for the three groups.Data was analysed using Graph pad Prism, version 5.0.We administered β-CD as single eye drops for 3 months in old C57BL/6 mice in which Aβ deposition and inflammation were established.All β-CD-treated mice had a significant reduction of around 65% in Aβ deposition on BM when immunostained tissue was examined compared with controls.Further, while Aβ deposition was relatively focal on BM in the untreated group, in β-CD-treated mice its distribution was clearly diffuse, which may be attributable to the process of clearance.Western blot was also used to quantify Aβ deposition in the retina and RPE of both the β-CD-treated and untreated mice.The results showed that β-CD decreased retinal Aβ levels markedly but this was not significant.To determine if reductions in pro-inflammatory Aβ were associated with a decreased inflammation, we immunostained adjacent section for active complement C3.There was a significant decline of around 75% in C3b on BM in β-CD-treated mice compared with controls.Western blot results showed a decrease in the level of C3 in the β-CD-treated mice compare to controls, but this did not reach statistical significance.Hence, topical administration of β-CD reduced Aβ deposition and inflammation in the aged outer retina.There is evidence from amphibians that β-CD improves the visual cycle, which is an enzyme pathway where opsin is recycled.This involves the removal of all-trans retinol, a potentially toxic element whose accumulation results in vulnerability to light-induced photoreceptor damage.Elevated all-trans retinol is also linked to Stargardt disease, a rare early onset form of AMD.In frogs, β-CD clears all-trans retinol in a dose-dependent manner.To determine if CDs impact on the visual cycle in aged mice we immunostained sections for retinal pigment epithelium specific protein 65, which is a protein expressed in the retinal pigment epithelium that plays a critical role in recycling visual pigments.There was an approximate 30% increase in RPE65 expression in β-CD-treated mice compared to controls.Differences were not simply related to intensity of labelled RPE65, but also to its 
distribution. In treated animals, expression was continuous along the RPE, but in controls the label was patchy, with gaps implying that expression was low in some RPE cells. To further quantify RPE65 in the RPE and retina, Western blot analysis was undertaken. The results revealed a significant increase in the level of RPE65 in the β-CD-treated mice compared with the untreated mice. This confirms the result obtained with immunostaining and shows that β-CD improves the visual cycle. Ageing is also associated with an accumulation of retinal lipids, and it has been argued that this may contribute to AMD. Phospholipids are produced in photoreceptors and subsequently deposited in the RPE as lipofuscin through the daily shedding of outer segments and their phagocytosis, potentially leading to RPE dysfunction and photoreceptor death. To reveal phospholipids and lipids, sections from both treated and control animals were stained with Sudan Black B, a histochemical lipid stain. It was not possible to assess staining on BM because melanin obscured the staining patterns; however, staining on outer segments was clear and there were marked qualitative differences between β-CD-treated and control mice, with reduced staining in the former group. Significant reductions in Aβ and inflammation, along with increased RPE65 expression, may be associated with improved retinal function. This was measured with scotopic and photopic ERG recordings, as there are marked reductions in the amplitude of the individual ERG waves with age and pathology. Significant improvements in the amplitudes of the aged ERG waves for both rod and cone function were found in treated mice, but no difference was seen in the latencies. The improvements were most marked at higher luminance, with an increase of approximately 28% in the scotopic a-wave amplitude at 28.1 cd s/m2 and 25% in the scotopic b-wave amplitude. A significant 20% improvement in the photopic b-wave was also seen in treated mice at the highest luminance, 84.2 cd s/m2. Hence, both the a- and b-waves under both scotopic and photopic conditions were significantly improved in β-CD-treated mice, suggesting that β-CD treatment improved both rod and cone function and hence visual function over a large dynamic range. Having established that β-CD is effective in reducing features of outer retinal ageing, we asked two further questions. First, is β-CD also effective in treating aged Cfh−/− mice, which suffer premature retinal Aβ and inflammation accumulation and share a genotype with 50% of AMD patients, in terms of Aβ and inflammation when given the same dosage pattern as in normal ageing? Second, if so, can this be achieved with a much lower dosing pattern? For the latter we dosed at only 10% of the level used for the long term treatment in C57BL/6 mice, with a total of 27 eye drops per eye over 3 months. Fig.
4 shows data from both long term treatment in Cfh−/− mice and also short term treatment where dosing was reduced by 90%.In both groups, Aβ was significantly reduced on BM compared to controls.As with the C57BL/6 mice, Aβ removal in both Cfh−/− groups resulted in a patchy distribution of Aβ on BM.C3 expression was also significantly reduced in the Cfh−/− mice given the same high dosage pattern as the C57BL/6 animals.However, reductions in C3 expression along BM of the short term treatment group were not significant.Hence, reducing dosage by 90% was effective in reducing Aβ but not C3.This may be because Aβ is pro-inflammatory and reductions in inflammation may lag behind Aβ removal.This study shows that 3 months topical treatment with β-CD has a significant impact on the aged mouse outer retina, reducing Aβ and C3 expression along with lowering lipid levels.This treatment also increased RPE65 expression and improved retinal function in aged mice.These results were reflected in treatment of Cfh−/− mice where our aims were more limited, only targeting Aβ and C3.β-CD remained effective here even when dosing was reduced by 90%, but only in terms of Aβ deposition.β-CD therapeutic abilities are probably related to its hydrophobic internal structure and hydrophilic outer surface, which increase the interaction and solubility of materials such as lipids and Aβ.CDs have been used previously with success in Alzheimer mouse models and to clear Aβ in the brain and in the retina to reduce lipofuscin, but in both studies administration has been systemic.However, it is known that topical administration results in 60% delivery to the retina, compared with only 40% when given systemically.Hence, systemic delivery is an inefficient route that does not target selectively.In spite of this, CDs have been used widely as vehicles for hydrophobic drug delivery.They are safe, FDA approved and economic.In mice, Aβ and C3 increase progressively with age on BM and this probably reduces outer retinal perfusion and may increase hypoxia.The same is true of lipid deposition.Lipid is a constituent of outer segments and it is likely that this material accumulates with age as phagocytosis efficacy probably declines.While we show reductions in each of these elements, our study is not the first demonstration of the impact of β-CD on deposition in the aged mouse outer retina.It has been shown that β-CD delivered systemically also reduces lipofuscin that accumulates with age.Each of these deposited materials eroded by β-CD have been implicated in AMD."Our results are similar to those obtained by Yao, J. et al. who used systemic β-CD delivery in an Alzheimer's mouse model to reduce brain Aβ deposition and microgliosis.The mechanism by which cyclodextrins reduce Aβ may be related to their modulation of cellular cholesterol, which has a complex relationship with amyloid precursor protein and Aβ metabolism.APP and β- and γ-secretases, which are enzymes involved in Aβ metabolism, are colocalised in cholesterol rich lipid rafts.Hence, lowering cholesterol directly may affect APP processing and reduce Aβ production.Yao et al. 
have also shown that β-CD increases the expression of genes critical for lipid transport, notably ABCA1, which is involved in increasing apolipoprotein E lipidation and improving Aβ clearance. β-CD treatment has a wider role than reducing deposition, as it significantly improves retinal function and the visual cycle, both of which decline with ageing. β-CD removes lipofuscin bisretinoids, by-products of phagocytosis in the retinal pigmented epithelium that are toxic to RPE cells and as such impact on the visual cycle; hence, their removal is likely to improve RPE cell function, which is critical for the cycle. More importantly, Johnson et al. revealed that β-CD efficiently removes all-trans retinol from frog rod photoreceptors. During the visual cycle, the 11-cis retinal chromophore is photoisomerised into all-trans retinal, which is then reduced to all-trans retinol in the outer segments. When this accumulates it is potentially toxic, and this has been linked to Stargardt's disease. In the frog, β-CD facilitates removal of this material and hence improves regeneration of 11-cis isomers. This may potentially explain the elevated RPE65 expression in β-CD-treated aged animals, as RPE65 is involved in the conversion of all-trans retinol to 11-cis retinal during phototransduction. Given the improved visual cycle and the reductions in Aβ and inflammation, it is not surprising that the ERG improved. However, ERGs are relatively insensitive, and significant changes in the amplitude of the respective waves require disproportionately large changes in the underlying biology before they are reliably detected. An example of this insensitivity comes from the finding that ERG thresholds are 2–3 log units less sensitive than those based on psychophysical measures. Hence, the true impact of β-CD on psychophysical visual function may be greater than reported here. Our improved ERGs were largely confined to higher luminance. There may be many reasons for this, including differential saturation. However, it has recently been shown that age-related cone loss in mice occurs before rod loss: it is present within the first year of life, while rod loss is primarily a feature of the second year. Hence, as our animals were around a year of age when sacrificed, cones may have been in a more vulnerable state than rods. Further, there is growing clinical evidence that cones may be particularly vulnerable to inflammation and hence may have improved function following CD treatment. However, we do not have proof of this explanation and the specific underlying mechanism has yet to be revealed. There are two additional reasons to think that β-CD may have potential in dealing with problems related to retinal ageing. First, we show that β-CD is effective in aged Cfh−/− mice. These are regarded as an AMD model as they have a genotype similar to that of 50% of AMD patients and suffer excess Aβ deposition and inflammation in the outer retina associated with advanced photoreceptor loss. Second, we show that significant reductions in Aβ deposition in the outer retina of these mice can be obtained with dosing at only approximately 10% of the level used in the C57BL/6 mice studied more extensively here. While we did reveal significant reductions in Aβ, our inability to show a significant reduction in inflammation with this low dosing may be due to time; had the treatment been extended, it is possible that inflammation would have declined in response to reduced pro-inflammatory Aβ. A number of studies have shown reductions in retinal Aβ and
inflammation with systemic immunotherapies in mice as a potential route to an AMD therapy, and a key study has also shown improved ERG b-wave function in transgenic mice. However, some of these models are relatively complex and systemic immunotherapies are inherently problematic. β-CD has been used extensively in humans for many years. Our results imply that the effects seen in its use may be due to multiple factors, including the drugs that CDs carry and also what the CD itself might do once it has delivered such drugs; hence CDs may not be a reliable vehicle alone without such qualifications. Given the data presented here using topical administration, it is possible that β-CD could be used in AMD patients. While there are very considerable retinal differences between mice and humans that raise serious questions about the validity of the mouse model, this has to be balanced against other key issues: the lack of a realistic alternative, the pressing nature of the disease, and the safe and economic route that β-CD offers.
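The densitometry and group comparison described in the methods above (band intensity taken as mean grey value multiplied by pixel area, normalised to the α-tubulin band, with groups compared by a Mann-Whitney U test) can be sketched as follows. All band measurements in this Python example are hypothetical placeholders, not values from this study.

```python
# Sketch of the Western blot densitometry workflow described above:
# absolute intensity = mean grey value x band area (pixels), normalised to the
# alpha-tubulin band of the same lane; groups compared by Mann-Whitney U test.
# All (mean grey, area) values below are hypothetical placeholders.
from scipy.stats import mannwhitneyu

def band_intensity(mean_grey: float, area_px: float) -> float:
    return mean_grey * area_px

def normalised_ratio(target: tuple, tubulin: tuple) -> float:
    """Target band intensity divided by the alpha-tubulin band intensity."""
    return band_intensity(*target) / band_intensity(*tubulin)

treated = [normalised_ratio(t, a) for t, a in
           [((120, 900), (140, 1000)), ((130, 950), (135, 1000)),
            ((125, 920), (138, 990))]]
control = [normalised_ratio(t, a) for t, a in
           [((80, 850), (142, 1005)), ((75, 830), (136, 980)),
            ((90, 870), (139, 1000))]]

u_stat, p_value = mannwhitneyu(treated, control, alternative="two-sided")
print(f"RPE65/alpha-tubulin ratio: U = {u_stat:.1f}, p = {p_value:.3f}")
```

The immunofluorescence quantification follows the same logic, with integrated density defined as the selected area multiplied by its mean grey value.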
Retinal ageing results in chronic inflammation, extracellular deposition, including that of amyloid beta (Aβ) and declining visual function. In humans this can progress into age-related macular degeneration (AMD), which is without cure. Therapeutic approaches have focused on systemic immunotherapies without clinical resolution. Here, we show using aged mice that 2-Hydroxypropyl-β-cyclodextrin, a sugar molecule given as eye drops over 3 months results in significant reductions in Aβ by 65% and inflammation by 75% in the aged mouse retina. It also elevates retinal pigment epithelium specific protein 65 (RPE65), a key molecule in the visual cycle, in aged retina. These changes are accompanied by a significant improvement in retinal function measured physiologically. 2-Hydroxypropyl-β-cyclodextrin is as effective in reducing Aβ and inflammation in the complement factor H knockout (Cfh<sup>-/-</sup>) mouse that shows advanced ageing and has been proposed as an AMD model. β-cyclodextrin is economic, safe and may provide an efficient route to reducing the impact of retinal ageing.
428
Dose- and time-dependent effects of genipin crosslinking on cell viability and tissue mechanics - Toward clinical application for tendon repair
Crosslinking has long been employed to augment the mechanical properties of collagen-based implants for the repair or replacement of musculoskeletal and cardiovascular tissues .The physiological environments of these systems can expose implants to extreme physical demands that include high mechanical stresses, high mechanical strains and/or highly repetitive loading.Such loading regimes can overwhelm even native tissues, a fact that is evidenced by high clinical rates of connective tissue disease and injury .Although tissue and biomaterial crosslinking strategies, traditionally using glutaraldehyde, have almost exclusively focused on ex vivo chemical treatments of an implant prior to its application, in vivo exogenous crosslinking has more recently been pursued.In this paradigm, the collagen matrix of injured tissue is bolstered by judicious and targeted application of low-toxicity crosslinkers.The idea here is to augment a tissue at the margins of a damaged region, arrest mechanically driven tissue degeneration and possibly provide a foothold for eventual recovery of tissue homeostasis.The use of ultraviolet radiation to augment the biomechanical properties of connective tissues within the eye has by now become a common clinical treatment of keratoconus , a disorder where local matrix weakness leads to tissue bulging under ocular pressure.Proof of concept studies using low toxicity crosslinkers in orthopedic applications are also emerging .Of the known low-toxic collagen crosslinking agents, one of the best characterized is genipin, a naturally occurring organic compound derived from the fruit of the gardenia plant.At acidic and neutral pH, GEN reacts with primary amines of biopolymers and forms mono- up to tetramer crosslinks .With increasingly basic conditions, GEN further undergoes ring-opening self-polymerization with increasing polymer length prior to binding to primary amines .With increasing polymer length, amine reactions with GEN slows, in turn leading to less reduced enzyme digestibility and swelling by GEN .The feasibility and benefit of employing GEN as an alternative to higher toxicity crosslinkers like glutaraldehyde has been demonstrated in a range of applications, including heart valves , pericardial patches , conduits for nerve growth guidance , scaffolds for tissue-engineered cartilage and decellularized tracheal transplantation , and as a more general application to augment the strength and degradation properties of collagen-based gels .Our own efforts have demonstrated in an in vitro model that application of GEN can arrest the progression of tendon lesions that are characteristic of acute injury , and could potentially be of benefit in addressing this urgent and unmet clinical need.Tendon injuries are also widespread in equine athletes, with much pathophysiological similarity to tendon injury in man .Using equine tendon, we demonstrated that immersion in a high concentration GEN solution could significantly recover post-injury tendon function, bringing it to a level similar to that of uninjured controls.This functional recovery was reflected in reduced tissue strains at a given mechanical stress, increased tissue elasticity and the arrest of mechanical damage accumulation during high-cycle dynamic loading.Although functional efficacy of these GEN treatments was clearly demonstrated, the physiological implications of the treatment were not investigated.More specifically, it was unclear what effect GEN treatment has on resident cell populations, and whether GEN 
concentrations at levels reported by others as non-cytotoxic could be sufficient to elicit recovery of mechanical integrity.This information is critical to guide further development of GEN based clinical approaches to in situ tissue augmentation of dense collagen-based connective tissues, including tendon.This information is also necessary to guide the design and development of delivery systems that can provide targeted tissue augmentation without unacceptable collateral damage to peripheral tissues.The first aim of the present study was to investigate in vitro dose-dependent tendon cell toxicity, exploring effects of both treatment time and concentration.Previous studies have similarly investigated a range of other cell types , with variable results indicating that tissue specific investigation of relevant cell phenotypes is warranted.The present series of studies focus on tenocytes as a class of fibroblastic cells derived from dense collagen connective tissue that has not yet been investigated.The second aim was to investigate the functional effects of GEN treatment on tendon explants to establish dependency of these effects on treatment concentration and duration.Ultimately our goal was to determine whether a balance between non-cytotoxicity and functional efficacy of GEN dosing could be achieved, widening the potential range of viable clinical applications for this increasingly used collagen crosslinking agent.All studies were carried out on isolated cells or tissue explants of the superficial digital flexor tendon from the front limbs of freshly slaughtered horses collected from a local abattoir.All experimental factors were generally performed with tissue extracted from the same animal and then replicated using tissue from additional animals.Samples were subjected in a random manner to either sham-treatment or in medium supplemented with genipin at concentrations ranging from 0.02 to 20 mM.Incubation times of 24, 72 and 144 h were investigated.The experiments were conducted starting with a broad approach and progressively focusing on a more limited range of dosages and their effects.First, cell-culture experiments were performed to assess cytotoxicity in terms of relative cell viability and metabolic activity.Using tissue explants, penetration of the crosslinking agent was assessed, homogeneity of crosslink distribution was quantified by inherent fluorescence of GEN crosslinks and the physical effects of the treatments were characterized as changes in denaturation temperature.All these experiments were performed over a wide range of concentrations and treatment times, aiming to identify dosing regimes capable of altering the physical properties of collagen with minimal cytotoxicity.In a second phase, gene expression and cell motility were examined within a reduced range of dosing regimes.Finally, tissue mechanics were characterized for a targeted range of GEN dosing, to identify minimal dosing thresholds able to achieve functionally relevant changes in biomechanical properties.Tissue explants were dissected from the core of the SDFT to a standardized size of approximately 2 × 2 × 2 mm3 under sterile conditions using previously described dissection methods , then incubated in either GEN-supplemented or control medium.For isolated cell-culture experiments, tendon cells were extracted by digestion of explanted tissue using protease type XIV for 2 h at 37 °C and collagenase B solution for 16 h at 37 °C.After the digestion process, the mixture was filtered and centrifuged.The cell pellet 
was resuspended, seeded at a density of 10^4 cells cm−2, and then cultured at 37 °C and 5% CO2 in expansion medium. Cells used in experiments were either freshly digested or passaged once at subconfluency. A 20 mM stock solution of GEN was freshly prepared in expansion medium for each experiment and then sterile filtered. The stock was then diluted to the required concentrations. For explant cell viability and differential scanning calorimetry, explants were incubated in culture dishes containing GEN-supplemented medium at 37 °C and 5% CO2; for tissue mechanics, explants were incubated in Falcon tubes. Explants used for biochemical analysis and crosslinking distribution were snap frozen after treatment and stored at −80 °C until later use. After GEN treatment, excess treatment solution was removed by blotting the samples on clean cellulose tissue. Superficial formation of blue pigmentation, which qualitatively indicates GEN crosslinking, was documented using a digital camera under consistent illumination. The same samples were then embedded in paraffin according to standard methods and cut into 6 μm sections. Inherent sample fluorescence (excitation wavelength: 510–560 nm; emission wavelength: 590 nm) was measured using a fluorescence-equipped upright microscope, since GEN crosslinks have been shown to emit fluorescence with an exponential correlation to mechanical properties. Cell motility was determined using a standard scratch assay. Briefly, confluent cell monolayers from three different animals were treated in GEN or control medium for 3 days. The monolayers were then scored with a sterile pipette microtip to leave a scratch of approximately 0.5 mm width. The scratch widths were monitored under an inverted microscope over 9 h, with digital images collected at 3 h intervals, from which the scratch width was measured. The cell migration speed was calculated at 3, 6 and 9 h after the scratch, and the average cell velocity was determined as the rate of scratch width closure divided by 2, to account for cell movement on each side of the scratch. Equally sized tendon explants from five horses were weighed and GEN treated. After 3 days, the cells were isolated from the tendon explants using enzymatic digestion as described above, and the surviving cells were counted twice in a Neubauer chamber using Trypan blue. Viable cell density was calculated by normalizing the average number of living cells to the initial wet weight of the explants. Explants from five animals were treated over the full range of concentrations and incubation times. After briefly blotting with paper to remove excess moisture, they were wet weighed and placed with their largest flat area onto the bottom of stainless steel pans to guarantee optimal heat transfer. After calibration of the differential scanning calorimeter, the pans were sealed and ramped at a constant heating rate of 10 °C min−1 from 0 to 120 °C using an empty pan as the reference. Denaturation was determined as the temperature at the peak heat flow in the endotherm. As enthalpy has previously been shown to be insensitive to exogenous collagen crosslinking, it was not considered in the analysis. Some pans were opened following DSC measurements, and the samples were dried in an oven for 16 h at 130 °C to determine the dry weight, which was then used together with the wet weight to determine water content. Cells from eight animals were cultured in a reduced range of concentrations for 3 days at 37 °C and 5% CO2. At harvesting, cell cultures were resuspended in RNA-Bee™. RNA was precipitated and reverse transcribed into cDNA using
RevertAid™ First Strand cDNA Synthesis Kit. As markers of matrix metabolism, we assayed the genes for collagen type 1 and matrix metalloproteinase 1, as well as the apoptotic marker caspase 3. As an internal control, the housekeeping gene glyceraldehyde 3-phosphate dehydrogenase was used after verification that it was stably expressed across sample conditions. Amplifications were performed in duplicate for the tested genes and in quadruplicate for the internal control, using a Power SYBR® Green polymerase chain reaction master mix according to standard manufacturer guidelines. Quantitative real-time PCR was performed and analyzed using the method implemented in the device software. Averaged gene expression relative to the housekeeping gene was calculated according to the 2^−ΔCT method and used to discriminate between groups. Uniaxial tensile testing was performed with small modifications to previously described protocols, for which the power and effectiveness of the employed experimental setup have been reported. Due to the large quantities of tissue needed for mechanical tests, tendons were collected and frozen until the day of testing/treatment. At the start of the experiment, tendons were thawed for 1 h at room temperature. During sample processing and tensile testing, dehydration of the samples was avoided by covering them with sterile gauze soaked in phosphate-buffered saline (PBS), or by spraying them with PBS such that a liquid film was visible on the samples. All of the sample processing and tensile testing was performed at room temperature. Single strips of tendon were subsectioned into either triplets or pairs, each of approximately 1 × 3 × 50 mm3, using a hand-held device with three microtome blades aligned in parallel. All samples were free from the surrounding connective tissue. Some resident collagen fibers were likely disrupted, leading to lower values of stiffness and strength compared with whole-tendon mechanical properties. Strips from each triplet/pair were then pseudo-randomly allocated either to the control group or to one of the GEN treatment groups. The precise cross-sectional area of each specimen was assessed by averaging triplicate readings from a CCD-based custom linear laser scanner adapted from the system described by Vergari et al.
On average, the cross-sectional area was 3.9 ± 0.7 mm2, with no differences between groups (p = 0.952). Samples were then mounted in custom clamps after being wrapped in two saline-soaked pieces of cloth to reduce slippage and clamping damage. Samples were preconditioned 10 times up to 6 MPa, corresponding to the end of the heel region and the onset of the linear region of the material curve. Upon preconditioning, the stress–strain curves became congruent within 10 cycles, without any apparent damage. After the 11th cycle to an applied stress of 6 MPa, the sample was held stretched and allowed to relax for 300 s. Relaxation was simply assessed as the relative stress decay over time, and the end value was used for statistical analysis. Displacement measurements were normalized to nominal strain based on the initial length at a tare stress corresponding to 0.05 MPa. After a further recovery time of 300 s, samples were ramped to failure at a constant strain rate of 0.5% L0 s−1, the strain rate used in all tests. The nominal stress was calculated based on the initial cross-sectional area before treatment. The elastic mechanical properties were measured as the tangential elastic modulus in the linear part of the stress–strain curve. The region of the 20% highest values of the numerical gradient of the stress–strain curve was used to define the linear range. First-order polynomials were fitted to the linear range, with the slope being the tangential elastic modulus. The linear fit was then shifted by 0.2% strain to determine the yield point as the intersection of the stress–strain curve with the shifted linear fit. Maximal stress and the corresponding strain and strain energy densities up to the yield point were calculated numerically. Normalized outcomes from cytotoxicity experiments were analyzed using probit regression, since this model fits the sigmoidal dose–response curves of toxicity experiments well. A probit regression fits dichotomous/binomial dependent variables and is commonly used in toxicology to estimate lethal doses. The sigmoidal dose–response data are transformed so that they become continuous, no longer bounded by 0 and 1, and linear, and can then be analyzed in a manner similar to a linear regression (both this type of fit and the tangent-modulus and yield calculation described above are illustrated in brief code sketches at the end of this article). The probit model was used to estimate the toxicity of the GEN treatment, together with 95% confidence intervals, in terms of relative effects on cells. Where there was a common slope of the regression lines for all treatment times, axis intercepts and relative potencies were used to assess the effect of treatment duration on cells, after an adequate model fit had been confirmed. The response of the DSC was assessed by analysis of covariance (ANCOVA), yielding regression lines for each treatment time. Where the interaction term was not significant, the model was additionally reduced to a one-way ANOVA to assess significant effects of concentration. Post hoc pairwise comparisons were performed using Bonferroni correction. Cell motility and mechanical properties were assessed by two-way ANOVAs for randomized block designs, and post hoc pairwise comparisons were used to discriminate between treatment group effects using Bonferroni correction or, in some cases, Dunnett's test to compare test groups against controls. Where the model assumptions were not met, Greenhouse–Geisser adjustment or the multivariate analysis of variance (MANOVA) approach was used. Gene expression was assessed by Friedman's test, the non-parametric alternative to the blocked ANOVA. In all statistical tests, differences were deemed significant with p ⩽ 0.05 and were deemed trends with p ⩽
0.1. In all cases, two-sided tests were performed. Results are reported as means with standard deviations, if not stated otherwise. All statistics were performed using SPSS v21.0 and/or Matlab R2013a. With increasing GEN concentration and time of incubation, the color of the tendon explants changed from light to dark blue, indicating a reaction of GEN with primary amines. Consistent with the dose- and time-dependent discoloration, homogeneous crosslinking was indicated by uniform fluorescence throughout the cut sections. The probit model indicated that cell viability was clearly concentration dependent and that cell viability was reduced in the 144 h treatment group; in other words, the 144 h GEN treatment was more potent/toxic. This can be seen as a significant leftward shift of the regression line and an increased potency for the 144 h group. While cell rounding was noted at intermediate concentrations, the two highest tested GEN concentrations apparently fixed the cell morphology in an elongated state, which is normal for tenocytes. Fluorescence observed throughout the cells may have indicated GEN crosslinking of cellular and intracellular proteins. Metabolic activity was also clearly affected in a concentration-dependent manner, coupled with a non-significant reduction of metabolic activity in the 144 h treatment group. These effects can be seen as a non-significant leftward shift of the regression line and a non-significantly increased potency of GEN. Cell viability in tendon explants was similarly concentration dependent according to the probit model. However, cell survival was not reduced in the 144 h treatment group, with no significantly shifted curves and no significant reduction in treatment effect. Two-way ANOVA showed a significant effect of cGEN on migration speed, with no variation over time after scratch application. Post hoc pairwise comparison indicated significantly reduced migration speeds at 1 mM and higher cGEN. ANCOVA revealed a common slope of the regression model fitted to the denaturation temperature, which indicated no significant effect of treatment duration. Therefore, the ANCOVA was reduced to a one-way ANOVA dependent on dose only. This showed a significant effect of concentration, with increasing denaturation temperature from 5 mM upwards. Friedman's test revealed a significant effect of the GEN treatment on collagen type 1 gene expression. An 83% reduction in relative gene expression was observed in the 1 mM GEN group compared with the 0 mM GEN controls, as was a trend of 23% decreased relative expression in the 0.1 mM GEN group. The cell apoptosis marker caspase 3 and MMP 1 both remained unaffected over the tested range of concentrations. During tensile mechanical testing, one triplet failed during mechanical preconditioning and was therefore excluded from further analysis. The tangent elastic modulus increased significantly, by 34%, after 20 mM GEN treatment, as assessed by the paired t-test. MANOVA for the blocked design also showed a significant treatment effect on the elastic modulus. A significant 23% increase was observed in the 5 mM treatment group. A non-significant 14% increase in elastic modulus was observed in the 1 mM group compared with controls using post hoc pairwise comparison. Relaxation, measured as relative stress decay over 300 s, was not significantly reduced due to crosslinking. This measure of viscoelasticity was nevertheless larger in controls and became smaller with increasing concentration. All further parameters assessed by two-way ANOVAs for the concentrations 1 and 5
mM, as well according to t-test on the 20 mM group, remained unaffected.Finally, treatments had no statistical effect on tissue swelling from an initial water content of 75.2 ± 4.8%.Also, initial lengths were very similar between groups.Crosslinking of collagen-based implants has been widely used to augment their strength, elasticity, and resistance to fatigue-induced mechanical damage and premature degradation by the host system .Crosslinking can improve implant survival within challenging mechanical environments, such as the cardiovascular and musculoskeletal systems.Genipin is a plant-derived collagen crosslinking agent that has been demonstrated to be highly mechanically effective yet substantially less cytotoxic than traditional chemical crosslinking agents like glutaraldehyde.Given the efficacy and relatively low cytotoxicity of GEN, it has emerged as a candidate for in vivo application, with various studies demonstrating proof of principle for in situ biomechanical efficacy in treating keratoconus of the eye and ruptures of the intervertebral disc annulus .Our own work has shown it to be capable of arresting damage propagation in an in vitro model of tendon tear, being able to restore normal levels of tissue strains .While GEN seems to offer promise as an in vivo collagen crosslinking agent, it has been reported to be cytotoxic at moderate concentrations, which vary depending on cell type.Until now it has been unclear whether a dense collagen matrix of connective tissues, like that in tendon, could be biomechanically augmented in situ without adverse consequences for the resident cell populations.The present study sought to define ranges of cytotoxicity for tendon cells, and then determine whether functionally relevant changes in tissue mechanics could be attained at these concentrations and exposure times.The overall goal was to better delineate the range of potential in situ applications that GEN may have, and provide dosing guidance for the development of delivery strategies.To provide a basis for effective in vivo dosing, we used probit regression to fit our experimental data.For instance, the model predicts that approximately 50% of cells would remain viable at GEN concentrations of 6.2 mM applied for 72 h, with decreasing cell viability after more prolonged incubation.While many cells remain alive at concentrations of 5 mM or less for the time spans we studied, effects on cell metabolism occurred at substantially lower concentrations.The probit model predicts a 50% drop in metabolic activity at a concentration of 0.4 mM after 72 h, with a trend to additional decreases in cell metabolism after 144 h of treatment.These findings were consistent with reduced cell motility at similar concentrations.While reduced collagen I expression was also consistent with reduced metabolic activity, apoptosis markers and matrix degradation markers were not affected – a favorable finding regarding potential of GEN for in vivo application.Functional, physical effects on the matrix were statistically significant only at the tested concentration above 5 mM, and were independent of time of incubation.We attribute these effects to homogeneous crosslink formation throughout the explants seen by fluorescence.Interestingly, GEN crosslinking of cartilage scaffolds has an increasing effect on mechanical properties with longer treatment , and that effect can start from 2 h on, as measured in collagen gels ; similarly, GEN crosslinking increases the mechanical properties of rat tail tendon within 4.5 h 
.Therefore, it is not clear whether our lack of time dependence is a result of underpowered experiments, low sensitivity of the tests or actual independence from the time of incubation.Although a main aim of the work was to identify “cell-safe” and “matrix-effective” GEN dosing guidelines, it is clear that these objectives may be mutually exclusive to some extent.It appears that 5 mM are able to induce relatively rapid crosslinking while leaving subpopulations of resident cells viable.Toward clinical application of in situ crosslinking to arrest tendon tear propagation, we believe a dosage of 5 mM or slightly lower with a dosing duration of 72 h would be a reasonable starting point for in vivo study.Although the degree of cytotoxicity that can be tolerated in vivo will vary according to the targeted tissue and clinical indication, properly balancing the need for rapid improvement in tissue function against the long-term consequence of altered cell and matrix metabolism will be imperative.No less important will be the need to develop effective vehicles for targeted crosslink delivery.Delivery of exogenous crosslinks that augment tissues at the intended site without adversely affecting neighboring tissues represents a major challenge and is a thrust of ongoing work in our laboratory.While robust augmentation of elasticity at 5 and 20 mM GEN concentrations is in line with our previous work at 20 mM , the mechanical effects in the present study were less drastic.This can be partly attributed to the slightly reduced sample number used here.Even considering the effects of sham incubation, which yielded trends of increased elastic modulus, ultimate strength and corresponding strain by 10–15% over the native tissue, we remain confident of GEN augmentation effects in the present treatments.When comparing the current studies to other investigations of GEN on tissues and cells, the present observed lack of time-dependent effects on cytotoxicity for culture periods of up to 72 h echoes previous studies on osteoblastic cells .This study reported no observable cytotoxic effects in 0.044 mM GEN solution, while 0.44 mM GEN induced a 50% reduction in metabolic activity of fibroblasts .These findings are in close agreement with our model prediction of 50% reduction in cell metabolism at 0.38 mM.These results somewhat contradict other reports of limited cytotoxic effects on chondrocytes within explants treated up to 42 days at concentrations of 0.22 mM, and this while attaining mechanical enhancement of the tissue .They also stand in contrast with another study reporting little cytotoxicity after crosslinking porcine heart valves in 8 mM GEN while obtaining a near doubling of stiffness .While the current study was intended to be fairly comprehensive, several limitations must be noted.First, focused investigation in narrower windows was deemed to be beyond the scope of this study.Such investigations will be necessary according to the clinical indication and the eventual GEN delivery approach that is utilized – aspects that were not explicitly addressed in the current study.Further, whether the effects we observed prove generally applicable to other dense connective tissues and clinical applications remains to be investigated.Nonetheless, the current study provides a useful baseline for dosage guidance in the design of future in vivo studies, and will be helpful in interpreting their outcome.As a further limitation, we observed slight increases in pH that were attributable to the addition of low GEN 
concentrations up to 2.5 mM.Since GEN crosslinking, self-polymerization before crosslinking and the crosslink effects on the physical properties of biopolymers have all been shown to be pH dependent , we cannot exclude confounding effects of these pH differences.However, the scales used to assess the effects of pH in those studies were at least tenfold the differences described here.Also, such small differences have only a marginal effect on cell viability and metabolism.Finally, we note that the occasionally large variability within the various performed experiments can be attributed to similarly large variability across animals, and across specimens from a single donor tissue.Inter-specimen variability can be attributed to handling factors like anatomical sampling location, difficulties when cutting the strips along the main collagen direction and challenges in assessing the wet weight of small samples.A further limitation is that the present study focused only on acute and relatively rapid crosslinking effects, neither investigating whether long-term administration of GEN at lower doses could possibly achieve a cumulative functional effect with less pronounced cytotoxicity nor observing the continuity of the documented effects.Finally, on a more basic level, we did not investigate how GEN crosslinks are actually formed or remain stabilized within tissue – factors that may vary depending upon the dosing concentration and time.While this information may be helpful to interpreting our results, we note that others have reported that the stabilizing effects of GEN are comparable to glutaraldehyde .The crosslinking agent genipin is increasingly invoked for the mechanical augmentation of collagen tissues and implants, and has been demonstrated to arrest mechanically driven tissue degeneration.This study established an in vitro dose–response baseline for the effects of genipin treatment on tendon cells and their matrix, with a view to in vivo application for the repair of partial tendon tears.Regression models based on a broad range of experimental data were used to delineate the range of concentrations that are likely to achieve functionally effective crosslinking, and to predict the corresponding degree of cell loss and diminished metabolic activity that can be expected.The data indicate that rapid mechanical augmentation of tissue properties can only be achieved by accepting some degree of cytotoxicity, but that post-treatment cell survival may be adequate to eventually repopulate and stabilize the tissue.On this basis, it can be concluded that development of delivery strategies and subsequent in vivo study is warranted.The authors have no conflicts of interest to declare.
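The probit dose–response analysis described above, from which the authors read off concentrations at which roughly 50% of cells remain viable or metabolically active, can be illustrated with a short script. This is a minimal sketch only: the concentrations and viability counts below are hypothetical, the fit is by direct maximum likelihood rather than the SPSS routines the authors used, and the estimated 50%-viability concentration is purely illustrative.

```python
# Illustrative probit dose-response fit on made-up viability counts (not the study's data).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

conc_mM = np.array([0.02, 0.2, 1.0, 2.5, 5.0, 10.0, 20.0])   # hypothetical GEN concentrations
n_cells = np.array([200, 200, 200, 200, 200, 200, 200])       # cells assessed per concentration
n_viable = np.array([196, 190, 178, 160, 121, 60, 12])        # hypothetical viable counts

x = np.log10(conc_mM)                                          # dose on a log scale

def neg_log_likelihood(params):
    a, b = params
    p = norm.cdf(a + b * x)                                    # probit: P(viable) = Phi(a + b*log10(conc))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_viable * np.log(p) + (n_cells - n_viable) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[1.0, -1.0])
a, b = fit.x
c50 = 10 ** (-a / b)                                           # concentration at ~50% viability
print(f"intercept = {a:.2f}, slope = {b:.2f}, 50%-viability concentration ≈ {c50:.1f} mM")
```

The same construction extends to the metabolic-activity and explant-viability endpoints by swapping in the corresponding response counts for each treatment time.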
The crosslinking agent genipin is increasingly invoked for the mechanical augmentation of collagen tissues and implants, and has previously been demonstrated to arrest mechanical damage accumulation in various tissues. This study established an in vitro dose-response baseline for the effects of genipin treatment on tendon cells and their matrix, with a view to in vivo application to the repair of partial tendon tears. Regression models based on a broad range of experimental data were used to delineate the range of concentrations that are likely to achieve functionally effective crosslinking, and predict the corresponding degree of cell loss and diminished metabolic activity that can be expected. On these data, it was concluded that rapid mechanical augmentation of tissue properties can only be achieved by accepting some degree of cytotoxicity, yet that post-treatment cell survival may be adequate to eventually repopulate and stabilize the tissue. On this basis, development of delivery strategies and subsequent in vivo study seems warranted. © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
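For the tensile analysis described in the methods of the preceding article, the tangential elastic modulus and the 0.2%-offset yield point are defined algorithmically: the top 20% of the stress–strain gradient defines the linear range, a first-order fit over that range gives the modulus, and the fit shifted by 0.2% strain intersects the curve at the yield point. The sketch below runs that calculation on a synthetic stress–strain curve; the constitutive curve and all resulting numbers are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical stress-strain curve (toe region, quasi-linear region, plateau); strain [-], stress [MPa]
strain = np.linspace(0.0, 0.12, 600)
stress = 40.0 * (1.0 - np.exp(-(strain / 0.05) ** 2))

grad = np.gradient(stress, strain)                        # numerical gradient of the curve
linear = grad >= np.quantile(grad, 0.80)                  # region of the 20% highest gradient values
E, intercept = np.polyfit(strain[linear], stress[linear], 1)   # slope = tangential elastic modulus

offset = E * (strain - 0.002) + intercept                 # linear fit shifted by 0.2% strain
yi = int(np.where(np.diff(np.sign(stress - offset)))[0][0]) + 1  # first crossing = yield point

yield_strain, yield_stress = strain[yi], stress[yi]
# strain energy density up to the yield point (trapezoidal integration)
energy = float(np.sum(0.5 * (stress[1:yi + 1] + stress[:yi]) * np.diff(strain[:yi + 1])))

print(f"E = {E:.0f} MPa, yield at {100 * yield_strain:.1f}% strain / {yield_stress:.1f} MPa, "
      f"strain energy density = {energy:.2f} MJ m^-3")
```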
429
Outcome assessment of emergency laparotomies and associated factors in low resource setting. A case series
Emergency laparotomy is a common procedure which is associated with substantial postoperative morbidity and mortality. Compared with other acute surgical emergencies, patients undergoing emergency laparotomy have a disproportionately high mortality, in both younger and older sick patients. Emergency laparotomy is a resource-intensive surgical procedure with high morbidity and mortality rates even in the best healthcare systems, and it remains an area of focus for quality improvement in developed nations. Perioperative management of patients undergoing emergency laparotomy in middle- and low-income countries is extremely challenging; it causes high postoperative 30-day patient morbidity and mortality and imposes a high healthcare cost burden. Despite this, there is a paucity of evidence on postoperative patient morbidity and mortality after emergency laparotomy in resource-limited settings, which hampers the establishment of an evidence-based optimal perioperative care bundle. In addition, in low-income countries, there are large volumes of emergency patients who need surgical care. However, infrastructure such as operating rooms, advanced equipment, skilled human resources, investigation modalities such as computerized tomography (CT), magnetic resonance imaging and ultrasound, and drugs are limited. Moreover, even with the available resources, there are variations in preoperative patient optimization, in the quality of surgical/anaesthetic care provision and in the utilization of the available resources, all of which could negatively impact postoperative patient outcome. In this study, we characterized the heterogeneity of patients presenting with an acute abdomen, the underlying pathologies, the delay from the onset of symptoms to hospital admission and surgical intervention, the types of surgical interventions performed, and postoperative morbidity and mortality within 30 days of emergency laparotomy in a tertiary teaching and referral governmental hospital with a high load of emergency patients and limited resources for patient care. No conflicts of interest to declare. Research Registration Unique Identifying Number: researchregistry3317. Ethical approval was obtained from the College of Medicine and Health Sciences, Academic, Research and Community Services Vice Dean. This study was also registered on researchregistry.com. Oral informed consent was obtained from each study subject after an explanation of what taking part in the research would involve, and any involvement was only after their complete consent. Anyone who was not willing to participate in the study had the full right not to participate. Confidentiality was ensured by all data collectors and investigators by using anonymous questionnaires and keeping the questionnaires locked away. This work has been reported in line with the PROCESS criteria. This is a single-centre prospective observational study. All consecutive patients who underwent emergency laparotomy during the study period were included. This is one of the largest governmental tertiary teaching and referral hospitals in the country, providing health services for more than five million people in the catchment area. The hospital has 500 beds, seven operation theatres, and one medical and one paediatric intensive care unit. The study was conducted from March 11 to June 30, 2015. Data were collected using the Emergency Laparotomy Network tool. A pre-tested, structured, English-language questionnaire and checklist were used to collect the data. The questionnaire was pre-tested before actual data collection. One BSc-holder data collector was selected and given one day of training to complete the data collection. Training of the data collector and pre-testing activities took place from February 15–30, 2015. To ensure data quality, training was given to the data collector, and the investigators directed and monitored the whole data collection process for consistency, completeness and accuracy. A pre-test was done; data were cleaned and checked every day, and a double data entry technique was used during data entry. All consecutive patients who underwent emergency laparotomies in our hospital during the study period were included. Cholecystitis or internal hernia after gastric bypass (which in the local setting are treated as semi-acute), laparotomies for non-planned reoperations after recent surgical procedures, and primary acute laparotomies in patients operated on more than 24 h after admission were excluded from the study. The main outcomes of interest were postoperative complications and death. The sociodemographic variables were age, sex, body mass index, American Society of Anesthesiologists (ASA) physical status, preoperative complications, preoperative co-morbidity, surgical indication, seniority of the anaesthetist and surgeon, length of hospital stay, perioperative temperature, and time of patient admission. In addition, anaesthesia-related factors included the type of anaesthesia (general anaesthesia vs regional anaesthesia), anaesthesia-related complications, and premedication. Moreover, operation-related factors comprised the indication for surgery, type of operation, extent of operation, risk of operation, duration of surgery, specific type of operation, and timing of surgery. Furthermore, the place of postoperative patient follow-up, postoperative complications and postoperative death were assessed. Emergency: an immediate lifesaving operation, with resuscitation simultaneous with surgical treatment. Emergency laparotomy: an emergency operation which involves exploration of the abdomen. Postoperative mortality: defined as death within 30 days after primary emergency laparotomy. Postoperative morbidity: defined as operation-related complications that occurred within 30 days after operation. Major operation: defined as any invasive operative procedure in which a more extensive resection is performed, e.g. a body cavity is entered, organs are removed, or normal anatomy is altered – in general, if a mesenchymal barrier was opened. Minor operation: a minor operation was defined as any invasive operative procedure in which only skin or mucous membranes and connective tissue are resected, e.g.
vascular cut-down for catheter placement or implanting pumps in subcutaneous tissue. The data were coded, entered and analyzed using SPSS version 20 software. Associations between dependent and independent variables were assessed, and their strength was presented using adjusted odds ratios and 95% confidence intervals. Binary and multiple logistic regressions were used to assess the association between outcome and explanatory variables (an illustrative sketch of this type of analysis is given at the end of this article). Variables from the bivariate analysis were fitted for the two outcome variables in relation to each explanatory variable. Those variables which fulfilled the minimum requirement of a 0.2 level of significance were then entered into the multivariate logistic regression analysis for further assessment, and model fit was checked using the Hosmer–Lemeshow goodness-of-fit test. Frequency tables, graphs and summary statistics were used. A total of 260 patients were included in the study, with a response rate of 100%. Of the study participants, 167 were males. The majority of patients were American Society of Anesthesiologists' physical status III, with the remainder ASA II, ASA IV and ASA I. Thirty-three out of 260 patients had associated preoperative co-morbidities. None of the patients had CT scanning before surgery, as CT was not available in the hospital during the study period. The majority of patients were operated upon under general anaesthesia with endotracheal intubation, whereas 12 were operated upon under combined general and regional anaesthesia. Two hundred and twenty-four patients were induced with ketamine, whereas 16, 7 and 1 patients were induced with thiopentone, propofol and halothane, respectively. Suxamethonium was used for intubation in the majority of patients, followed by pancuronium (5) and vecuronium (2). Two hundred and forty patients were maintained with halothane during the operation, whereas 7, 1 and 12 patients were maintained with intravenous drugs, combined intravenous and inhalational anaesthetics, and preoperatively instituted regional anaesthesia (such as epidural anaesthesia), respectively. Most patients were monitored with pulse oximetry, non-invasive blood pressure apparatus and ECG during the operation. There was no capnograph available during the study period. The majority of patients were given 2 L of fluid during the operation, whereas 29, 76, 29, 5 and 2 patients were given <1 L, 1 L, 3 L, 4 L and 5 L respectively, with a mean value of 1.6 L.
Three patients were not given fluid intraoperatively. One hundred and sixty-one out of 260 patients had undergone an abdominal operation, followed by appendectomy. Most patients also had late surgical intervention after hospital admission according to the definition of the International Society of Emergency Laparotomy Network, which is claimed to contribute to the poor postoperative patient outcome. Most patients were given antibiotic prophylaxis before the operation. However, only one out of 260 patients was given thromboembolic prophylaxis before the operation. The majority of patients were operated on during the night. The WHO (or an equivalent) surgical safety checklist was not used during the study period. The maximum, minimum and mean duration of operation were 360, 25 and 68.89 min, respectively. The main surgical indications and types of operations performed are summarized below. Most patients passed through the recovery room after the operation. Only two patients were transferred directly from the operation theatre to the ward and/or ICU. Patients were managed in the surgical ward (103), trauma unit (98), orthopaedics (38), paediatrics (19) and other wards (2). Anaesthetists were involved in the postoperative management of 97 patients. The minimum and maximum total length of hospital stay after the operation were 1 and 30 days respectively, with a median value of 6.0 ± 4.68 days. The overall incidence of postoperative morbidity was 39.2% within 30 days of operation. Twenty-six out of 260 patients were re-admitted from the wards to the recovery room after the operation. Surgical re-intervention after the operation was done for 14 patients. Of these, 11 were performed under general anaesthesia, 2 under local anaesthesia and 1 as an endoscopic intervention. The most common postoperative morbidity was vital sign derangement among patients who underwent emergency laparotomy with diagnoses of peritonitis, penetrating trauma, small bowel obstruction, gastric perforation, intussusception, abdominal abscess, perforated gastric ulcer, gangrenous bowel, ischemic bowel and large bowel obstruction. In addition, pneumonia occurred in patients with penetrating trauma, abdominal abscess, gastric ulcer, blunt trauma and negative laparotomy. Wound infection developed in patients with intussusception, gangrenous sigmoid volvulus, gangrenous right sigmoid colon and blunt trauma. The overall incidence of postoperative mortality was 3.5%. Of these, 3, 4 and 2 patients died within 24 h, within 72 h and within 30 days after the operation, respectively. The variables with a p-value of <0.05 in the bivariate analysis but with no association with postoperative mortality in the multivariate analysis were age, sex, ASA status, co-morbidity, vital signs at admission, preoperative analgesia, type of anaesthesia, intraoperative analgesia, type of muscle relaxant, vital signs during the recovery phase, time from admission to operation, type of operation, prophylactic antibiotics, use of intraoperative warming, and perioperative blood transfusion. The anaesthetists' preoperative opinion had a positive association with postoperative mortality after laparotomy. This study revealed that the overall incidences of postoperative morbidity and mortality were 39.2% and 3.5% within 30 days of operation, respectively. This finding was high compared with a study conducted in Pakistan, where the incidence of postoperative complications was 33.7%. This discrepancy could be due to better perioperative care of patients in Pakistan compared with our setup.
However, our finding was low compared with a study conducted in India, which could be attributed to differences in the quality of perioperative patient care. The factors that had a strong association with postoperative morbidity were the presence of preoperative co-morbidity and bowel resection. The presence of co-morbidities and extensive operations like bowel resection, where patients mostly develop bowel ischemia/gangrene, are well-known factors contributing to postoperative complications after emergency laparotomy. In addition, in the current study, the level of consciousness at the end of anaesthesia and any 30-day surgical re-intervention had a positive association with postoperative mortality. Optimal perioperative patient care and early interventions could reduce postoperative patient mortality. Concerning postoperative morbidity, the commonest postoperative complications were vital sign derangement, hospital-acquired pneumonia, postoperative nausea and vomiting (PONV), wound infection, intra-abdominal abscess, fever and anastomotic leak. The incidences of pneumonia and wound infection were low in our study compared with a previous study, which might be attributable to the quality of perioperative surgical and anaesthetic care provision. Moreover, late presentation of patients to hospital and delayed surgical intervention after admission contribute greatly to perioperative patient morbidity and mortality. In the current study, the majority of patients had late presentation to the hospital after the onset of symptoms of the disease and late surgical intervention after hospital admission, according to the definition of the International Society of Emergency Laparotomy Network. This finding was comparable with a previous study. The late presentation might be due to the fact that most of our patients came from rural areas, and there was also a large emergency caseload at the hospital, which could contribute to late surgical intervention. Most patients passed through the recovery room after the operation. Only two patients were transferred directly from the operation theatre to the ward and/or ICU. Moreover, there was no surgical ICU, which could contribute to adverse postoperative outcomes, as failure to admit patients to the appropriate level of care immediately after emergency laparotomy is a main cause of morbidity and mortality. Furthermore, the WHO (or an equivalent) surgical safety checklist was not used during the study period. The establishment of a high dependency unit and the use of the WHO or an equivalent surgical safety checklist during operations may improve postoperative patient outcome after such high-risk operations. It is also agreed that high-risk operations such as emergency laparotomy should be led by specialist surgeons and anaesthetists. However, in this study, consultant surgeons and anaesthetists were involved in only a small number of operations. This is an observational study, in which practice variations among caregivers during the perioperative course of patient care could affect the study outcomes. In addition, the lack of use of a WHO or equivalent surgical safety checklist and of a surgical ICU could have negatively impacted postoperative patient morbidity and mortality after emergency abdominal surgery. This is the first study on postoperative patient outcome after emergency laparotomy in the host hospital and country, and it could provide insight into the significance of the existing problem and the need to develop a perioperative patient care bundle. The incidences of
postoperative morbidity and mortality were high in our university tertiary teaching and referral hospital. Preoperative co-morbidity and bowel resection were determinant factors for postoperative morbidity, whereas the level of consciousness during recovery from anaesthesia and any re-intervention within 30 days after the primary laparotomy were contributing factors for postoperative patient mortality. Preoperative optimization, early surgical intervention, and consultant surgeon/anaesthetist-led perioperative care for these high-risk surgical patients could improve postoperative outcome. In addition, the use of the WHO or an equivalent centre-based surgical safety checklist during operations and the establishment of a high dependency unit should be emphasized. Moreover, investigation modalities such as CT scanning need to be established in the hospital to improve the quality of preoperative diagnosis and perioperative surgical patient care. Furthermore, a perioperative patient care bundle/protocol should be introduced in the hospital to improve patient safety. It would also be paramount to conduct the same study in large cohorts of patients in similar settings in the country. Not commissioned, externally peer reviewed. Ethical approval was obtained from the University of Gondar, College of Medicine and Health Sciences, Academic, Research and Community Services Vice Dean. Please see the attached ethical clearance file. This study was supported by the University of Gondar. This grant had no influence on the conduct of the study or the preparation of the manuscript. Endale Gebreegziabher Gebremedhn, Abatneh Feleke Agegnehu and Bernard Bradley Anderson conceived the study, developed the proposal, collected data, analyzed data, prepared the manuscript, approved the final manuscript and agreed to publish in the International Journal of Surgery.
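The adjusted odds ratios with 95% confidence intervals reported in this study come from multivariable logistic regression run in SPSS. As a rough illustration of that type of analysis, the sketch below fits a logistic model on a synthetic data set and exponentiates the coefficients to obtain AORs; the variable names, effect sizes and data are hypothetical stand-ins, not the study's data.

```python
# Illustrative sketch: adjusted odds ratios with 95% CIs from a multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 260  # same cohort size as the study, but entirely simulated records
df = pd.DataFrame({
    "comorbidity": rng.binomial(1, 0.13, n),       # hypothetical binary exposure
    "bowel_resection": rng.binomial(1, 0.20, n),   # hypothetical binary exposure
})
logit_p = -0.8 + 1.0 * df["comorbidity"] + 1.2 * df["bowel_resection"]
df["morbidity"] = rng.binomial(1, (1 / (1 + np.exp(-logit_p))).to_numpy())

model = smf.logit("morbidity ~ comorbidity + bowel_resection", data=df).fit(disp=False)
aor = np.exp(model.params)           # exponentiated coefficients = adjusted odds ratios
ci = np.exp(model.conf_int())        # 95% confidence intervals on the odds-ratio scale
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

Coefficients are exponentiated because logistic regression works on the log-odds scale: an AOR of 2 for a binary predictor means the odds of the outcome double when that predictor changes from 0 to 1, holding the other covariates fixed.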
Background: Emergency laparotomy is a high risk procedure which is demonstrated by high morbidity and mortality. However, the problem is tremendous in resource limited settings and there is limited data on patient outcome. We aimed to assess postoperative patient outcome after emergency laparotomy and associated factors. Methods: An observational study was conducted in our hospital from March 11- June 30, 2015 using emergency laparotomy network tool. All consecutive surgical patients who underwent emergency laparotomy were included. Binary and multiple logistic regressions were employed using adjusted odds ratios and 95% CI, and P-value < 0.05 was considered to be statistically significant. Result: A total of 260 patients were included in the study. The majority of patients had late presentation (>6hrs) to the hospital after the onset of symptoms of the diseases and surgical intervention after hospital admission. The incidences of postoperative morbidity and mortality were 39.2% and 3.5% respectively. Factors associated with postoperative morbidity were preoperative co-morbidity (AOR = 0.383, CI = 0.156–0.939) and bowel resection (AOR = 0.232, CI = 0.091–0.591). Factors associated with postoperative mortality were anesthetists' preoperative opinion on postoperative patient outcome (AOR = 0.067, CI = 0.008–0.564), level of consciousness during recovery from anaesthesia (AOR = 0.114, CI = 0.021–10.628) and any re-intervention within 30 days after primary operation (AOR = 0.083, CI = 0.009–0.750). Conclusion and recommendation: The incidence of postoperative morbidity and mortality after emergency laparotomy were high. We recommend preoperative optimization, early surgical intervention, and involvement of senior professionals during operation in these risky surgical patients. Also, we recommend the use of WHO or equivalent Surgical Safety Checklist and establishment of perioperative patient care bundle including surgical ICU and radiology investigation modalities such as CT scan.
430
Tobacco use among people living with HIV: analysis of data from Demographic and Health Surveys from 28 low-income and middle-income countries
The advent of and increased access to antiretroviral therapy has transformed HIV from a deadly disease to a chronic condition for many people living with HIV.1,With ART, people living with HIV can now have a near-normal life expectancy.1,However, unhealthy behaviours such as tobacco use threaten to undermine some of the gains that have been made.2,Smoking increases the risk of death among people living with HIV.3,4,A study3 among 924 HIV-positive women on ART in the USA reported an increased risk of death due to smoking with a hazard ratio of 1·53.A prospective cohort4 of 17 995 HIV-positive individuals from Europe and North America receiving ART found a mortality rate ratio of 1·94 for smokers when compared with non–smokers.The average years of life lost by HIV-positive smokers compared with HIV-positive non-smokers have been estimated as 12·3 years, which is more than twice the number of years lost by HIV infection alone.5,People living with HIV are more susceptible to tobacco-related illnesses such as cardiovascular disease, cancer, and pulmonary disease when compared with those who are HIV-negative or with the general population.6–8,Furthermore, smoking among people living with HIV increases susceptibility to infections such as bacterial pneumonia, oral candidiasis, and tuberculosis.9–11,A case-control study12 among 279 ART-naive HIV-positive men in South Africa found that current smoking tripled the risk of pulmonary tuberculosis.Smoking also increases the risk of developing AIDS among people living with HIV.3,This increased susceptibility has been mainly attributed to biochemical mechanisms including the immunosuppressive effects of smoking and its negative impact on immune and virological response even when on ART.13,Behavioural mechanisms have also been suggested—for example, an association between smoking and non-adherence to antiretroviral therapy.14,15,Few estimates are available of population-level prevalence of tobacco use among people living with HIV from low-income and middle-income countries, where the burden of HIV and tobacco-related illnesses is greatest.16–18,Our study aimed to address this evidence gap for all forms of tobacco use among people living with HIV, using data from 28 nationally representative household surveys.Our study also compared tobacco use prevalence among people living with HIV to that among HIV-negative individuals.Evidence before this study,We searched MEDLINE for articles published from inception until Dec 31, 2015, that included the terms “smoking” or “tobacco use” or “smokeless” or “cigarette” AND “HIV” or “human immunodeficiency virus” or “AIDS” or “Acquired Immune Deficiency Syndrome” in the title and were from low-income and middle-income countries.We updated this search on Sept 30, 2016.We identified six primary research articles on the prevalence of tobacco smoking among people living with HIV that covered eight countries, of which only one article was based on national-level Demographic and Health Survey data from one country.We did not identify any multi-LMIC comparisons on the topic.Added value of this study,Our study is the largest to our knowledge to report country-level and overall prevalence estimates for tobacco smoking, smokeless tobacco use, and any tobacco use among people living with HIV from 28 LMICs using nationally representative data that is comparable across countries.Our study is also the first to compare the country-level prevalence estimates for tobacco use for people living with HIV with those for HIV-negative 
individuals in the respective countries where such data are available.Implications of all the available evidence,Findings from our study and all other identified studies confirm that for LMICs, the prevalence of tobacco use is higher among people living with HIV than among those without HIV, for both men and women.Policy, practice, and research action on tobacco cessation among people living with HIV is urgently needed to prevent the excess morbidity and mortality due to tobacco-related diseases and to improve the health outcomes in this population."This action could include exploring effective and cost-effective tobacco cessation interventions for people living with HIV that are appropriate and scalable in low-resource settings; the integration of tobacco use services within HIV programmes in LMICs including proactive identification and recording of tobacco use, as well as the provision of tobacco use cessation interventions; increasing health-care providers' awareness and competencies on provision of tobacco use cessation services among people living with HIV; increasing awareness of the harms due to tobacco use and the benefits of quitting among people living with HIV; and implementing smoke-free policies within HIV services.We did a secondary analysis of the most recent Demographic and Health Survey data from 28 LMICs where both tobacco use and HIV test data were made publicly available.Access to and use of this data was authorised by the DHS programme.The DHS is designed to collect cross-sectional data that are nationally representative of the health and welfare of women of reproductive age, their children, and their households at about 5-year intervals across many LMICs.The DHS procedure including the two-staged sampling approach for the selection of census enumeration areas and households, questionnaire validation, data collection for household, men, and women, and data validation are comprehensively described elsewhere.19, "In all selected households, women aged 15–49 years are eligible to participate, and those who give consent are interviewed using a women's questionnaire. 
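Because DHS respondents are selected through the two-stage cluster design just described, prevalence estimates from these surveys are computed with sampling weights and with standard errors that respect clustering by enumeration area and stratification. The sketch below is a simplified, self-contained stand-in for that design-based estimation (the study itself uses Stata's survey estimation commands); the records, weights and cluster identifiers are synthetic, and the variance uses the usual with-replacement primary-sampling-unit approximation rather than the exact DHS implementation.

```python
# Illustrative sketch: weighted prevalence with a cluster- and stratum-aware standard error.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000  # toy individual-level records; in a DHS the PSU is the census enumeration area
df = pd.DataFrame({
    "stratum": rng.integers(0, 10, n),
    "psu": rng.integers(0, 200, n),
    "weight": rng.uniform(0.5, 2.0, n),
    "tobacco": rng.binomial(1, 0.15, n),   # 0/1 indicator for current tobacco use
})

def weighted_prevalence(d, y="tobacco", w="weight"):
    """Weighted prevalence with a Taylor-linearised SE, treating PSUs as sampled
    with replacement within strata (a common approximation for DHS-type designs)."""
    W = d[w].sum()
    p = (d[w] * d[y]).sum() / W
    g = d.assign(z=d[w] * d[y]).groupby(["stratum", "psu"]).agg(z=("z", "sum"), w=(w, "sum"))
    g["u"] = g["z"] - p * g["w"]           # linearised residual of the ratio estimator
    var = 0.0
    for _, s in g.groupby(level="stratum"):
        nh = len(s)                        # number of PSUs in the stratum
        if nh > 1:
            var += nh / (nh - 1) * ((s["u"] - s["u"].mean()) ** 2).sum()
    return p, np.sqrt(var) / W

p, se = weighted_prevalence(df)
print(f"prevalence = {p:.3f}  (95% CI {p - 1.96 * se:.3f} to {p + 1.96 * se:.3f})")
```

Age standardisation to a reference population and the random-effects pooling of country estimates described in the analysis below would then be layered on top of weighted country-level estimates of this kind.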
"In many surveys, men aged 15–54 years from a subsample of the main survey households are also eligible to participate, and those who give consent are interviewed using a men's questionnaire.The surveys are comparable across countries through the use of standard model questionnaires and sampling methods.In the DHS, tobacco use is ascertained by three questions to be answered “yes” or “no”.Two are about whether the respondent currently smokes cigarettes or uses any other type of tobacco.The third asks the respondent what types of tobacco they currently smoke or use, for which all tobacco types are recorded including country-specific products.20,HIV status data are obtained from a HIV testing protocol that undergoes a host country ethical review and provides status from informed, anonymous, and voluntary HIV testing for both women and men.Blood spots are collected on filter paper from a finger prick and transported to a laboratory for testing.The testing involves an initial ELISA test, and then retesting of all positive tests and 5–10% of the negative tests with a second ELISA.For those with discordant results on the two ELISA tests, a new ELISA or a western blot is done.We analysed country-level DHS data to calculate prevalence estimates for tobacco use in people living with HIV.We also computed relative prevalence ratios comparing prevalence between HIV-positive and HIV-negative individuals.For each country for which HIV status data could be linked to individual records in the general survey, we included data on HIV-positive and HIV-negative individuals in the analysis.We classified respondents as “tobacco smoker” if they responded “yes” to smoking cigarettes, pipes, or other country-specific smoking products such as water pipe or hookah; as “smokeless tobacco user” if their response was “yes” to the use of chew, snuff, or other country-specific smokeless tobacco products; and as “any tobacco user” if they indicated that they smoked tobacco, used smokeless tobacco, or both.These categories were not mutually exclusive, and respondents could be classified as all three.For each country, we estimated the crude prevalence of tobacco smoking, smokeless tobacco use, and any tobacco use for males and females.We used STATA version 14 for our analyses.The analysis included sampling weights to account for differential probabilities of selection and participation, and also accounted for clustering and stratification in the sampling design.21,Within STATA version 14, we declared the DHS datasets as survey type from two-stage cluster sampling: the selection of census enumeration areas based on a probability and random selection of households from a complete listing of households within the selected enumeration areas.19,Reported estimates include the associated 95% CIs.We also computed age-standardised prevalence rates for men and women separately, using the WHO World Standard Population Distribution based on world average population between 2000 and 2025.22,Countries were classified geographically into the following six WHO regions: Africa, Americas, eastern Mediterranean, Europe, southeast Asia, and western Pacific.We computed pooled regional estimates for the African region only, which had a sufficient number of countries with available data for a meta-analysis.We computed these estimates in MetaXL version 5.3 by first stabilisation of the variances of the raw proportions with a double arcsine transformation and then application of a random-effects model.23,We assumed that the country-level estimates 
were different, yet related.With a random-effects meta-analysis, the SEs of the country-specific estimates are adjusted to incorporate a measure of the extent of variation among them.24,The amount of variation, and hence the adjustment, can be computed from the estimates and SEs from the country-level data included in the meta-analysis.24,We computed overall pooled prevalence estimates for all countries combined together using the same methodology as for the regional-level estimates.To study differences in prevalence rates between HIV-positive and HIV-negative individuals, we estimated country, regional, and overall relative prevalence ratios for tobacco smoking, smokeless tobacco use, and any tobacco use separately for males and females.We used RevMan version 5.1 for this analysis using the random-effects model, as described above.We used the I2 statistic to assess heterogeneity between country-specific estimates: values of 25% or less indicated low heterogeneity, values greater than 25% but less than 75% indicated moderate heterogeneity, and values of 75% or greater indicated high heterogeneity.We explored potential sources of heterogeneity for prevalence estimates through meta-regression analysis.This analysis tested the association between the country-level covariates, as well as year of survey, with estimated prevalence of any tobacco use.We used STATA version 14 for meta-regression.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.We were able to link HIV status data with the individual data in the general DHS survey for 28 LMICs, with data for men available in all LMICs apart from Cambodia.For these, our analysis provided data for 6729 HIV-positive men and 11 495 HIV-positive women, as well as 193 763 HIV-negative men and 222 808 HIV-negative women for the same age groups.Of all LMICs, we were able to include 24 of 46 in Africa, two of 26 in the Americas, one of 11 in southeast Asia, and one of 18 in the western Pacific regions.Data were not available for any of the LMICs in Europe and the eastern Mediterranean.Table 1 shows the country-level characteristics of HIV-positive and HIV-negative men and women whose data were included in the analysis.Of HIV-positive men from the 27 LMICs, 27·1% reported any tobacco use.The crude prevalence ranged from 9·7% to 68·3%.The regional pooled prevalence for Africa of any tobacco use was 26·0%.Overall, 24·4% of HIV-positive men reported smoking tobacco.We found a substantial variation in crude prevalence of current tobacco smoking across countries, from 9·7% to 54·8%.The regional pooled prevalence for Africa of current tobacco smoking in HIV-positive men was 24·2%.Overall, 3·4% of HIV-positive men reported use of smokeless tobacco.The country-level crude prevalence varied considerably and was as high as 41·4% in India.The regional pooled prevalence for Africa of current smokeless tobacco use was 2·6%.When compared with HIV-negative men, the pooled prevalence among HIV-positive men was significantly higher for any tobacco use and for smoking.The difference between the two groups on smokeless tobacco use prevalence did not reach significance.The pooled prevalences for the African region were significantly higher for HIV-positive men than for HIV-negative men for any tobacco use, tobacco smoking, and smokeless tobacco use.Of 
HIV-positive women from the 28 LMICs, 3·6% reported any tobacco use.Lesotho had the highest crude prevalence at 16·4%.The regional pooled prevalence for Africa of any tobacco use was 2·8%.Overall, 1·3% of HIV-positive women reported smoking tobacco.The Dominican Republic had the highest crude prevalence at 10·1%.The regional pooled prevalence for Africa of current tobacco smoking in HIV-positive women was 1·0%.Overall, 2·1% of HIV-positive women reported smokeless tobacco use.Lesotho had the highest crude prevalence at 15·7%.The regional pooled prevalence for Africa of current smokeless tobacco use was 1·9%.When compared with HIV-negative women, the pooled prevalences among HIV-positive women were significantly higher for any tobacco use, current smoking, and smokeless tobacco use.The pooled prevalences for the African region were significantly higher for HIV-positive women than for HIV-negative women for any tobacco use, tobacco smoking, and smokeless tobacco use.We found substantial heterogeneity in prevalence estimates between countries within and across regions.However, meta-regression did not reveal any significant predictor variable for either men or women.This study is the largest to our knowledge to provide up-to-date, country-specific, regional and overall prevalence estimates for tobacco smoking, smokeless tobacco use, and any tobacco use among people living with HIV using nationally representative samples from 28 LMICs.Our study shows that tobacco use prevalence in LMICs is generally higher for people living with HIV than for HIV-negative individuals, both men and women.Other studies have reported a higher prevalence of smoking in people living with HIV than in HIV-negative individuals or in the general population.For example, in the USA, the reported prevalence of smoking among people living with HIV is 50–70% compared with about 20% in the general population or HIV-negative individuals.3,9,25–31,These differences are much higher than the differences we observed for tobacco use in our study, and are observed for both men and women.A possible explanation is that most of the developed country studies were done in non-random samples of population subgroups, such as those in a particular treatment programme or low-income groups.The differences in study results could also be attributed to differences in the profile of HIV-positive populations in developed countries compared with LMICs.Very few studies have examined tobacco use prevalence among people living with HIV in LMICs; most of these studies are also of small non-random subsamples and only present data on tobacco smoking.32–35,Few of these studies make comparisons with the general population prevalence or that among HIV-negative individuals.Those studies that have used larger samples or population-level data have reported prevalence estimates that are closer to our findings.A cross-sectional survey36 in Zimbabwe among 6111 factory workers—88% of whom were men—found a smoking prevalence of 27% among HIV-positive individuals versus 17% among HIV-negative individuals.Later, Lall and colleagues37 reported tobacco use prevalence in India of 68% among HIV-positive men versus 58% among HIV-negative men from a secondary data analysis of 50 079 observations from the 2006 general-population National Family Health Survey data.We found substantial variation in tobacco smoking, smokeless tobacco use, and any tobacco use prevalence between countries.Tobacco smoking was much more prevalent among men than among women—an observation that is 
consistent with findings from elsewhere.38,For women, smokeless tobacco was the primary form of tobacco use in 11 of the 28 countries.This finding has also been noted elsewhere,21 and potential reasons include that, in some countries, smokeless tobacco use is more socially acceptable than tobacco smoking among women.39,A potential for misclassification also exists, particularly owing to under-reporting in contexts in which tobacco smoking, or any tobacco use, is not socially or culturally acceptable, particularly for women.35,40,The gender differences in product preferences need to be accounted for when designing policies and interventions for tobacco use for people living with HIV.Additionally, although the prevalence of smoking is very low overall among women living with HIV, women can be exposed to second-hand smoke, particularly if their partners smoke, and the prevalence of this exposure and associated harms needs investigation.A recent analysis38 of DHS data from 19 sub-Saharan African countries also identified other factors that increase the risk of being a smoker among people living with HIV in addition to gender, including being from poorer households and living in urban areas.Possible reasons for high tobacco use among people living with HIV have included tobacco being used to cope with HIV-related symptoms such as neuropathic pain, as well as with anxiety, stress, and depression, all of which are higher in this population.30,41,Studies have found that people living with HIV express an inaccurate perception of their life expectancy that affects their perceived susceptibility to the risks of tobacco use.42,Additionally, a study43 among 301 HIV-positive individuals in Mali found that, although knowledge of their HIV infection did not lead individuals to take up smoking, it negatively influenced those already smoking by increasing their consumption.These factors taken together suggest that dissemination of information on the harms of tobacco use might not be enough to reduce tobacco use in this population.The benefits of quitting also need to be emphasised either through use of targeted graphic health labels or use of culturally-appropriate mass communication media, as provided for in the guidelines for the implementation of tobacco control policies compliant with the WHO Framework Convention on Tobacco Control.People living with HIV who use tobacco find it hard to quit and often recommence use after they have stopped.25,28,31,Pool and colleagues44 have published a systematic review including 14 studies on effectiveness of smoking cessation interventions among people living with HIV, most of which combined pharmacotherapy (nicotine replacement therapy or varenicline) with psychotherapy.They found some poor-quality evidence of effectiveness in the short term but no clear evidence of effectiveness in the long term.44,Many interventions included in these studies were not tailored to the unique needs of people living with HIV, which was suggested as a potential reason for the poor success rates of these interventions.16,HIV-positive individuals face social stigma, mental and physical comorbidities, alcohol misuse, and co-dependencies on other substances, all of which influence their tobacco use behaviour, quit attempts, and successful cessation rates.Future interventions should take account of these complex social, psychological, and other health challenges faced by most people living with HIV.In DHS data, current tobacco use status is ascertained by self-report, which raises
possibilities of under-reporting, as mentioned previously.35,40,Smokeless tobacco was treated as a homogeneous group in our study when in fact it is a surrogate term for a diverse range of tobacco products including snuff and chewing tobacco.Additionally, countries differed in the way data on smokeless tobacco use was collected and recorded, which meant that making clear distinctions between different smokeless tobacco products proved even more difficult.Our study comprised observations from individuals who had agreed to have an HIV test as part of the DHS, and had available test results that could be linked to their tobacco use data in the general DHS dataset.This condition meant that our sample was restricted to a self-selected population and might not represent the general population—a limitation of the DHS data on HIV and not just of our analysis.Furthermore, the samples analysed for some countries were relatively small, particularly for people living with HIV.Although unlikely, a potential for selection bias still exists when individuals included in the sample were substantially different from those who were not, with respect to their smoking status.Evidence suggests that the prevalence of tobacco use could be growing in some LMICs, especially in females.21,Our study included data from 2003 to 2014, and therefore our prevalence estimates could be lower than the current status.Our analysis was limited to a few WHO regions, and even within these regions—with the exception of Africa—we could only include a few countries owing to unavailability of DHS data.Other countries had tobacco use data but did not have linked HIV data or did not make it publicly available, which restricted our ability to compute wider regional and global prevalence estimates.Tobacco use leads to substantial morbidity and mortality among HIV-positive individuals.45,Countries with a high prevalence of tobacco use among HIV-positive populations, as highlighted by our study, should prioritise introduction of tobacco cessation in their HIV treatment plans.However, most HIV care providers are less likely to correctly identify current smokers and feel confident in their ability to influence smoking cessation than are general health workers.46, "Given the overwhelming task of managing HIV infection and its complications, tobacco cessation might also be less of a priority from both providers' and patients' perspectives.25,42,47",Future research action to improve the health of this population could therefore include exploring effective and cost-effective tobacco cessation interventions for people living with HIV that are sustainable and scalable in low-resource settings."For policy and practice, action could include the integration of tobacco cessation within HIV programmes in LMICs including proactive identification and recording of tobacco use, as well as the provision of tobacco cessation interventions; increasing health-care providers' awareness and skills in providing cessation advice to people living with HIV; increasing awareness of the harms due to tobacco use and the benefits of quitting among people living with HIV; and implementation of smoke-free policies within HIV-treatment facilities.
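To make the pooling approach described in the Methods concrete, the sketch below re-implements a Freeman-Tukey double arcsine transformation, DerSimonian-Laird random-effects pooling, and an I2 calculation for a set of country-level proportions. It is only a schematic: the counts are invented, the sin-squared back-transformation is a simplification of the exact inversion, and the published estimates were produced with MetaXL and RevMan rather than with this code.

```python
# Schematic re-implementation of the pooling described in the Methods.
# Country-level numerators (x) and denominators (n) are made-up numbers.
import numpy as np

x = np.array([52, 110, 33, 78])     # hypothetical: tobacco users among HIV-positive men
n = np.array([210, 480, 150, 300])  # hypothetical: HIV-positive men sampled per country

# Freeman-Tukey double arcsine transformation stabilises the variance of raw proportions.
t = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
v = 1.0 / (n + 0.5)                 # approximate within-country variance on the transformed scale

# DerSimonian-Laird random-effects pooling.
w = 1.0 / v
t_fixed = np.sum(w * t) / np.sum(w)
Q = np.sum(w * (t - t_fixed) ** 2)
k = len(t)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1.0 / (v + tau2)
t_pooled = np.sum(w_star * t) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))

# I2 heterogeneity, interpreted as in the Methods (<=25% low, 25-75% moderate, >=75% high).
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

def back_transform(t_value):
    """Simplified inverse of the double arcsine transformation (adequate for a sketch)."""
    return np.sin(t_value / 2) ** 2

print("pooled prevalence:", round(back_transform(t_pooled), 3))
print("95% CI:", round(back_transform(t_pooled - 1.96 * se_pooled), 3), "-",
      round(back_transform(t_pooled + 1.96 * se_pooled), 3))
print("I2 (%):", round(I2, 1))
```

The same random-effects machinery, applied to log-transformed prevalence ratios, yields the kind of pooled HIV-positive versus HIV-negative comparisons reported above.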
Background Tobacco use among people living with HIV results in excess morbidity and mortality. However, very little is known about the extent of tobacco use among people living with HIV in low-income and middle-income countries (LMICs). We assessed the prevalence of tobacco use among people living with HIV in LMICs. Methods We used Demographic and Health Survey data collected between 2003 and 2014 from 28 LMICs where both tobacco use and HIV test data were made publicly available. We estimated the country-specific, regional, and overall prevalence of current tobacco use (smoked, smokeless, and any tobacco use) among 6729 HIV-positive men from 27 LMICs (aged 15–59 years) and 11 495 HIV-positive women from 28 LMICs (aged 15–49 years), and compared them with those in 193 763 HIV-negative men and 222 808 HIV-negative women, respectively. We estimated prevalence separately for males and females as a proportion, and the analysis accounted for sampling weights, clustering, and stratification in the sampling design. We computed pooled regional and overall prevalence estimates through meta-analysis with the application of a random-effects model. We computed country, regional, and overall relative prevalence ratios for tobacco smoking, smokeless tobacco use, and any tobacco use separately for males and females to study differences in prevalence rates between HIV-positive and HIV-negative individuals. Findings The overall prevalence among HIV-positive men was 24.4% (95% CI 21.1–27.8) for tobacco smoking, 3.4% (1.8–5.6) for smokeless tobacco use, and 27.1% (22.8–31.7) for any tobacco use. We found a higher prevalence in HIV-positive men of any tobacco use (risk ratio [RR] 1.41 [95% CI 1.26–1.57]) and tobacco smoking (1.46 [1.30–1.65]) than in HIV-negative men (both p<0.0001). The difference in smokeless tobacco use prevalence between HIV-positive and HIV-negative men was not significant (1.26 [1.00–1.58]; p=0.050). The overall prevalence among HIV-positive women was 1.3% (95% CI 0.8–1.9) for tobacco smoking, 2.1% (1.1–3.4) for smokeless tobacco use, and 3.6% (95% CI 2.3–5.2) for any tobacco use. We found a higher prevalence in HIV-positive women of any tobacco use (RR 1.36 [95% CI 1.10–1.69]; p=0.0050), tobacco smoking (1.90 [1.38–2.62]; p<0.0001), and smokeless tobacco use (1.32 [1.03–1.69]; p=0.030) than in HIV-negative women. Interpretation The high prevalence of tobacco use in people living with HIV in LMICs mandates targeted policy, practice, and research action to promote tobacco cessation and to improve the health outcomes in this population. Funding South African Medical Research Council and the UK Medical Research Council.
431
A review and reinterpretation of the architecture of the South and South-Central Scandinavian Caledonides—A magma-poor to magma-rich transition and the significance of the reactivation of rift inherited structures
The present architecture of the Scandinavian Caledonides is principally the result of the Silurian–Devonian Scandian continental collision of Baltica-Avalonia with Laurentia, the subsequent late- to post-orogenic extension, and deep erosion.During the Scandian collision and in parts during the early-Caledonian events affecting the distal margin of Baltica, the rifted continental margin of Baltica was deeply buried beneath Laurentia and a complex stack of nappes was thrust over great distances towards the south-east onto Baltica.The underlying autochthon comprises Archean to Palaeoproterozoic basement in the north and Mesoproterozoic basement in the south that is covered by autochthonous metasediments of Neoproterozoic to latest Silurian age.The nappe-stack comprises allochthons of Baltican, transitional oceanic-continental, oceanic, and Laurentian affinity.The allochthons of Baltican affinity include Neoproterozoic pre- to post-rift successions as well as post-rift continental margin deposits of Cambrian to Silurian age and foreland basin sediments deposited in front of and incorporated into the advancing thrust sheets during the Scandian Orogeny.Baltican-derived basement and basement-cover nappes are commonly referred to as the Lower and Middle Allochthons and are interpreted to contain transgressive sequences deposited along the Iapetus margin of Baltica.Ophiolite/island arc assemblages and nappes of Laurentian affinity are commonly referred to as the Upper and Uppermost Allochthons, respectively.In the traditional tectonostratigraphic scheme, all units with ocean-floor-like lithologies are referred to as ophiolites or dismembered ophiolites and are interpreted to have initially formed in an ocean, outboard of all rocks with continental affinity.The traditional interpretations assume that a mostly uniform and continuous tectonostratigraphy with the same palaeogeographic significance can be traced along the entire length of the Scandinavian Caledonides.However, present-day understanding of continental margins and their remnants within mountain belts is that rifted margins have a more complex architecture, dominated by different and partly diachronous segments both along and across strike.Such segmentations may include very different fault geometries and structural styles, producing major variations in width and length of basins and highs as well as more fundamental and larger-scale variations with magma-poor and magma-rich segments.On a regional scale, passive margins may also be decorated with relatively narrow failed-rift basins that separate thicker and variably sized continental slivers or blocks from the adjacent continent.Such basins may be floored by stretched to hyperextended/hyper-thinned continental crust, transitional crust or embryonic oceanic crust.The Orphan, Porcupine and Rockall basins off-shore Newfoundland and the British and Irish Isles, as well as the Norway Basin adjacent to the Jan Mayen continental ridge and the margin of Norway, are good present-day examples.Within the thinned crust, guyots may also be present, e.g.
the Anton Dohrn and Hebrides seamounts or the Rosemary bank in the northern continuation of the Rockall trough."These seamounts are attributed to episodic magmatic pulses of the Iceland plume during opening of the North Atlantic.Another modern-day analogue of an across-strike, complexly structured, rifted margin is provided by the Red Sea detachment systems in Eritrea.Later inversion and incorporation of such complexly configured passive margins into a mountain belt, as discussed by Beltrando et al., results in a tectonostratigraphy with laterally changing nappe characteristics that may include previous extensional slivers of continental basement associated with hyperextended deep basins, sediments with or without spreading-related magmatism, and in several cases also exhumed hydrated/carbonated mantle peridotites.A structural succession with such characteristics does not easily comply with the traditional, belt-long, tectonostratigraphic correlations and the traditional nomenclature used in the Scandinavian Caledonides.Here, we describe and discuss a re-interpretation of the tectonostratigraphy of the South and South-Central Scandinavian Caledonides.A key area for the understanding of the architecture and re-interpretation of the tectonostratigraphy is where the southern magma-poor segment faces the northern magma-rich segment.We suggest that lithostratigraphic units, previously assigned to the Upper Allochthon and hence of suspect to outboard status have typical characteristics of magma-poor and magma-rich continental margins and ocean-continent transitions.In the Caledonides, these rocks are, however, variably overprinted by orogenic deformation and metamorphism.Nevertheless, many of their lithological characteristics are well-enough preserved to be compared with present-day passive margins and examples of fossil OCT zones in other mountain belts, for example the Alps and the Pyrenees.We show that a gradual transition from the magma-rich to the magma-poor segment was related to the formation of a large Jotun-type basement microcontinent/continental sliver and its termination in the Gudbrandsdalen area in central-south Norway.Furthermore, we also suggest that the nappes of Baltican affinity can be divided into rift domains that are well-established from present-day rifted margins, i.e. 
a proximal/necking domain, an extended domain; a distal/outer domain, and a microcontinent.In southern Norway, Late Proterozoic to Lower Palaeozoic continentally derived deposits locally lie unconformable on Baltican basement or on allochthonous crystalline rocks.These include the Osen-Røa, Kvitvola, Synnfjell, Valdres NCs.The Late Proterozoic successions are interpreted to represent proximal pre- and syn-rift sediments, which vary from fluvial to marine deposits.Marinoan and/or the younger Gaskiers glaciogenic deposits are present in several of these units.In some cases, the Neoproterozoic sediments are stratigraphically overlain by Cambrian to Lower Ordovician post-rift black-shale and carbonate successions, which in turn, are locally overlain by Lower to Middle Ordovician turbidites that grade from distal at the base to proximal at the top.In other areas, mostly in south-western Norway, the fossiliferous Cambrian–Ordovician overlies a glacially striated basement floor in Hardangervidda and are in turn overthrust by mica schists of unknown age.With the exception of the ~616 Ma Egersund mafic dykes, minor volcanics and dykes in the Hedmark basin and a horizon of basaltic volcanics on Hardangervidda, mafic magmatic rocks are absent in the Baltican basement and the Neoproterozoic–Ordovician succession in the foreland area of South-Scandinavia.The Lower Bergsdalen Nappe includes crystalline basement and Proterozoic metasediments, which are associated with metamorphosed basic to intermediate plutons and volcanics.Interleaved with the coarse-grained metasediments and magmatic rocks are phyllites and mica schists.The metasediments are mainly coarse-grained meta-arkoses and quartzites.Some of the granites in the crystalline sheets were dated by the Rb-Sr whole-rock method at 1274–953 Ma.Kvale interpreted the quartzites to be the oldest rocks of the Lower Bergsdalen Nappe because mafic and felsic magmas intrude into the metasediments.Consequently, the quartzites were interpreted to be pre-Sveconorwegian in age.The Lower Bergsdalen Nappe is positioned structurally above the Western Gneiss Region, a thin discontinuous cover of mica schists and allochthonous metasediments, which are possibly equivalent to the Synnfjell NC.It is structurally overlain by a unit of metasediments that contain a number of detrital and solitary metaperidotite bodies.The Lower Bergsdalen Nappe can be traced around the core of the Bjørnafjorden Antiform, as originally defined by Kvale.Between the Bergen Arcs and Lom a prominent metasediment-dominated complex, which contains numerous mantle-derived metaperidotite lenses and local clastic serpentinites, including detrital breccias, conglomerates, and sandstones has been mapped.The metasedimentary matrix is dominated by originally fine-grained sediments, now mica-schist and phyllite.Because of its mixed character, this unit has been non-genetically referred to as a mélange by Andersen et al., 2012; Jakob et al., 2017a, 2017b.However, to avoid confusion with other metaperidotite-bearing mélanges, such as those that have been formed at the plate interface in subduction zones and because of its resemblance with reworked OCT assemblages in other mountain belts, we refer to this unit as an OCT assemblage.The OCT assemblage structurally overlies the WGR and Lower Bergsdalen NC.From the Major Bergen Arc and around the Bjørnafjorden Antiform, it can be traced below both the allochthonous crystalline rocks of the Lindås NC, the Upper Bergsdalen NC, as well as the main 
ophiolite/island-arc nappe complexes of the Iapetus.From Stølsheimen across Sognefjorden, NE-wards to Lom, the same OCT unit has been mapped continuously below the western flank of the Jotun NC.The mostly pelitic metasediments also contain lenses of metaconglomerate and metasandstone as well as thin calcareous horizons, up to 40 km long and thin discontinuous sheets of Proterozoic gneisses, minor gabbro and granodiorite of Late Cambrian to early Middle Ordovician age and lenses of undated mafic rocks in the SW.Conglomerates and sandstones with a continental source indicate a Baltican-affine source.Detrital zircons show that sedimentation continued at least into the Middle Ordovician.Late Scandian syn-orogenic granitoids intrude both the metasediments of the Major Bergen Arc, including the OCT assemblage, as well as the Jotun and Lindås NCs in western Norway.The mafic and granitoid intrusives, which occur near the southern termination of the Jotun NC, in the Major Bergen Arc, and at Stølsheimen are unknown between Stølsheimen and Lom.The entire OCT assemblage experienced upper greenschist to amphibolite facies metamorphism during the Scandian Orogeny after ~430 Ma.Because of its characteristic lithological assemblage that resembles those of inverted magma-poor hyperextended margins in other orogens, the OCT assemblage is interpreted to have formed by pre-Scandian hyperextension and exhumation of subcontinental mantle or by the reworking of a magma-poor hyperextended rifted margin in the Ordovician.The Upper Bergsdalen NC represents a second sequence of allochthonous Baltican basement gneisses and metamorphosed continental margin sequences that are intercalated with Lower Palaeozoic phyllites and mica schists; similar to the Lower Bergsdalen Nappe.The southern part of the Upper Bergsdalen NC is structurally overlain by the Jotun NC, whereas south of Sognefjorden, the rocks of the Upper Bergsdalen NC trail out into the mica schists of the OCT assemblage.The metaperidotites of the OCT assemblage, however, consistently occur structurally below the Upper Bergsdalen NC.The lower thrust sheets of the Upper Bergsdalen NC are dominated by crystalline gneisses whereas the upper thrust sheets are mostly composed of metasediments that are locally associated with mafic igneous rocks.A meta-rhyolite of the Upper Bergsdalen NC was dated at 1219 ± 111 Ma.The magmatic history of the Upper Bergsdalen NC is apparently similar to that of the Lower Bergsdalen Nappe.However, a number of undated mafic sheets and dykes cutting metasediments occur in both of the Bergsdalen nappes, and their complete Proterozoic and younger,intrusive history is not yet known.The Blåmannen Nappe in the minor Bergen Arc is another sliver of basement-cover rocks that structurally overlies the OCT assemblage.It consists of allochthonous crystalline basement that is unconformably overlain by a sedimentary sequence, including a tillite, that is suggested to have been deposited in the Proterozoic.The Jotun, Dalsfjord and Lindås NCs are large nappes of crystalline basement of Baltican affinity, some of which are associated with or partly unconformably overlain by Neoproterozoic continental margin sequences, including the Turtagrø metasediments on the western flank of the Jotun NC and the Høyvik Group in the Dalsfjord NC.The crystalline rocks of these NCs are Mesoproterozoic in age and are dominantly anorthosite-mangerite-charnockite-granite magmatic rocks, which experienced high-grade metamorphism during the Sveconorwegian orogeny.Unlike 
the continental metasediments in the Osen-Røa, Kvitvola, Synnfjell and Valdres nappes, the Høyvik Group of the Dalsfjord NC contains a mid-ocean ridge-type mafic dyke-swarm and minor pillow-basalts at high stratigraphic levels.The Høyvik Group and the dykes were deformed and metamorphosed before the deposition of the Middle Silurian Herland Group.40Ar/39Ar cooling ages of phengitic mica in the Høyvik metasediments show that the deformation occurred before 447–449 Ma.The Herland Group metasandstones and metaconglomerates are unconformably overlain by the Sunnfjord obduction mélange and the ~443 Ma Solund-Stavfjord Ophiolite Complex.The Herland Group deposition and transgression, and the deposition of the Sunnfjord mélange is interpreted to herald the obduction and emplacement of the Solund-Stavfjord ophiolite.The Lindås NC is another AMCG basement nappe of Baltican affinity, which is structurally positioned above the OCT assemblage.The composition and age of the Lindås NC is similar to those of the Dalsfjord NCs.The north-western trailing end of the Lindås NC contains ~430 Ma eclogites indicating an early Scandian deep burial and metamorphism of the Lindås NC.Unlike the Dalsfjord NCs, the Lindås NC contains minor Late Scandian syn-orogenic granitoids.The Jotun NC is a large sheet of crystalline mostly AMCG rocks that is similar to those discussed-above.On its western flank, the Jotun NC includes highly strained metasediments, which are referred to as the Turtagrø metasediments.The Turtagrø metasediments are similar to the sparagmites of the Valdres NC and are apparently also free of syn-rift magmatic rocks.Locally, there are abundant Late Scandian syn-orogenic granitoid dykes intruding the Jotun NC.In this study, we treat the Jotun, Dalsfjord and Lindås NCs as a large composite unit due to their similar AMCG-lithologies, geochronological fingerprints, and tectonostratigraphic position structurally above the OCT assemblage and below the outboard nappes of Iapetus and Laurentian origin.The structurally highest Scandian thrust nappes of the SW Caledonides consist of a complex assemblage of ophiolite-island-arc and magmatic intrusive complexes.In the Major Bergen Arc the ~489 Ma Gullfjellet ophiolite was emplaced above the Lindås NC, as well as the OCT assemblages.The ophiolite/island-arc complexes occur again structurally above the Baltican-affine continental rocks between Hyllestad and Nordfjord, and structurally above OCT assemblages near the north-eastern termination of the Jotun NCs.The Dalsfjord NC and its sedimentary cover is structurally overlain by the ~443 Ma Solund-Stavfjord ophiolite, which was constructed on the remnants of early Ordovician ophiolite/island-arc in a back-arc basin setting.The Late Cambrian to Early Ordovician ophiolite island-arc complexes in the SW Caledonides are interpreted to have originated along the Laurentian margin of the Iapetus.They record a protracted history of subduction, arc-continent collision, volcanism and sedimentation, as well as Early-Caledonian metamorphism and deformation prior to Scandian thrusting of the nappes onto Baltica.In the South-Central Caledonides, the basement and minorautochthonous metasediments are exposed in a series of tectonic windows, including the WGR, the Atnsjøen Window and the core of the Skardøra Antiform.The structurally lowest nappes are the Osen-Røa and Kvitvola nappes, which preserve continental margin sequences that contain little to no syn-rift igneous rocks.A few isolated minor occurrences of tholeiitic basalt can 
be found stratigraphically overlying quartzites of the Osen-Røa NC.Towards the north-east into Sweden, these autochthonous and allochthonous continental margin successions can be correlated with the Dividal Group and the Risbäck NC.Structurally above the proximal syn-rift to post-rift sediments of the Osen-Røa and Kvitvola NCs is a series of crystalline basement gneisses that can be traced from Norway across the Skardøra Antiform into Sweden.In Sweden, east of the Skardøra Antiform, the gneisses are referred to as the Tännäs Augen Gneiss.The Tännäs Augen Gneiss is Mesoproterozoic in age and is locally mylonitised along tectonic contacts at its base and top.These gneisses are apparently without Ediacaran syn-rift intrusives.West of the Skardøra Antiform, the gneisses can be traced as a thin band, at a consistent tectonostratigraphic level, along-strike into the Gudbrandsdalen Antiform.Here, the gneisses are referred to as the Høvringen Gneiss Complex, Rudihø Crystalline Complex and Mukampen Suite.The allochthonous gneisses of the Rudihø and the Mukampen Suite are 1700–1200 Ma and experienced high-grade metamorphism associated with some magmatism at 920 to 900 Ma.A late tonalitic dyke cutting the Mukampen Suite was dated at ~430 Ma.Similar to the Tännäs Augen Gneiss east of the Skardøra Antiform, no Ediacaran syn-rift intrusives have been reported from these gneisses.Thus, the gneisses at Høvringen, Rudihø and Mukampen are similar in age, composition and metamorphic history to the Tännäs Augen Gneiss and some of the large crystalline nappes of the South Norwegian Caledonides.Structurally above the sheets of Baltican basement gneisses are Neoproterozoic metasedimentary complexes in the Särv and Seve NCs, as well as in the Hummelfjell and Heidal Groups.The Särv and the structurally overlying Seve NCs comprise Neoproterozoic pre- to syn-rift continental margin sediments that contain a large volume of rift-related mafic dykes and local volcanics.The Särv and Seve metasediments experienced greenschist facies Scandian metamorphism in the east and an increase in Scandian metamorphism towards the west.Regionally, the Seve NC experienced diachronous amphibolite- to eclogite-facies high-pressure metamorphism in the Early to Late Ordovician.The mafic dyke swarms and plutons in the Särv and Seve NCs, including the ~596 Ma Ottfjället Dyke Swarm, are interpreted to represent Iapetus break-up magmatism.Regional studies of the Seve NC in Central and North Sweden show that pre-Caledonian continental margin-type metasediments in most parts are densely intruded by pre-Caledonian, Ediacaran mafic dyke swarms.These complexes are interpreted to represent the magma-rich segment of the Baltican rifted margin.The regional geochemistry of the ~1000 km long Scandinavian Dyke Swarm indicates that formation of the melts was related to a large igneous province formed by a mantle plume associated with the Central Iapetus Magmatic Province.The Seve NC also contains a number of solitary metaperidotite bodies and detrital serpentinites, and the Ediacaran OCT is considered to be represented by the upper sections of the Seve NC.The Särv and Seve NCs can be correlated with the Hummelfjell and Heidal Groups in Norway.The Hummelfjell and Heidal metasediments are mostly composed of Neoproterozoic quartzites and meta-arkoses that locally grade upwards into metapelites, and experienced similar metamorphic conditions as the adjacent Seve NC in Sweden.The metasediments of the Hummelfjell Group contain a number of undated mafic intrusives and
volcanics, which traditionally have been correlated with the rift-related igneous rocks in the Särv and Seve NCs.The number and volume of mafic igneous rocks within these Neoproterozoic successions decrease in south-westerly direction from the Särv and Seve NCs towards the Heidal Group.However, some mafic intrusives are reported from the upper sections of the Heidal Group.Gjelsvik also reported granitoid dykes cutting the mafic intrusives within the Heidal Group.However, none of these rocks have yet been dated.Between Vågåmo and the Skardøra Antiform, metaperidotite-bearing metasediments structurally above the Heidal and Hummelfjell groups are referred to as the Sel Group and Aursunden Group.A lithological assemblage similar to those of the Sel and Aursunden groups also occurs in the Einunnfjellet Dome area overlying Neoproterozoic quartzites correlated with the Hummelfjell Group.The mica schist matrix of the metaperidotite-bearing complexes between Vågåmo and the Skardøra Antiform are similar to the OCT assemblages further southwest, and contain both solitary and detrital metaperidotites, siliciclastic metaconglomerates and metasandstones as well as layers and lenses of turbidite-deposits.A major difference to the OCT assemblages between Stølsheimen and Lom is that the metaperidotite-bearing complexes between Vågåmo and the Skardøra Antiform also contain a large number of metamorphosed mafic bodies of unknown age.South of lake Rien, an undeformed quartz diorite pluton intrudes schists of the Aursunden Group and contains xenoliths of the surrounding schists.Similar to some of the Late Scandian granitoids in the Major Bergen Arc the granitoid at Rien contains euhedral magmatic epidote indicating emplacement of the granitoid at pressures above 4 kbar.The Sel Group in the Gudbrandsdalen Antiform contains numerous discontinuous lenses of monomict detrital serpentinites.Near Otta, one locality also hosts an island-type Dapingian–Darriwilian fauna, which shows that sedimentation at this stratigraphic level took place in the Early–Middle Ordovician.The Aursunden Group is also suggested to be of Cambrian–Ordovician age.Both the Sel and the Aursunden Group are considered to have been deposited on the uppermost metasediments of the Heidal Group as well as on sheets of mafic crystalline rocks at the base of the Ordovician metasedimentary complexes, which, in turn are supposed to have tectonic contacts with the Heidal and Hummelfjell Groups below.Apparent depositional contacts between the metaperidotite-bearing complexes and the units structurally below are exposed, e.g., at Vågåmo and Hornsjøhøe.The rocks of the Trondheim NC are dominated by three separate tectonic units, i.e. the Støren, Gula and Meråker nappes, all of which are composed of oceanic, ophiolite and island-arc assemblages.All units of the Trondheim NC are intruded by ~440–430 Ma bimodal plutons.The Silurian plutons also intrude and are associated with older Cambrian–Ordovician ophiolite/arc rocks, e.g. 
in the Trondheim area or near Folldal, and with low-grade sediments containing Laurentian fossils as well as a Middle Silurian trondhjemite pluton, which contains inherited zircons of Archean age.Thus, the plutonic history of the Trondheim NC is similar to that of the ophiolite/island-arc complexes in the SW Caledonides.The Gula nappes were commonly believed to be of Baltican origin.However, the assumption of a Baltican origin of the Gula nappes was founded on Tremadocian graptolites and similarities in trace element geochemistry of black shales from the Gula nappes and the Cambrian autochthon of Baltica.The Rhabdinopora flabelliformis sociale fossils in the so-called Dictyonema shales of the Gula nappes have also been described from the Tremadoc in Argentina, China, Belgium, and Newfoundland and are considered to be near cosmopolitan.Similarly, the high contents of V, Mo and U in black shales from the Gula nappes and the autochthonous Cambrian–Ordovician of Baltica are indicators of the depositional environment rather than of provenance.Because of the lack of evidence for unequivocal Baltican origin and the common intrusive history in all nappes of the Trondheim NC as well as the faunal indications for a Laurentian affinity in the western Trondheim NC, we consider the entire Trondheim NC to be exotic with respect to Baltica.Sturt et al., followed by Nilsson et al., suggested that the Sel and Aursunden groups are also unconformable on the Gula nappes.However, an unconformity below the Ordovician metasediments and on top of the Heidal and Hummelfjell groups, as well as their continuation into the Särv and Seve nappes, would stitch the continental margin successions together with the oceanic assemblages of the Trondheim NC nappes as early as the Early–Middle Ordovician.The presence of such a terrane-link would require moving the Neoproterozoic metasediments of the Heidal and Hummelfjell Groups as well as the Ediacaran sediments of the Särv and Seve nappes structurally below the Trondheim NC before the deposition of the Sel and Aursunden groups.Moreover, the Laurentian, Baltican and Celtic faunas were highly diverse at the time of the deposition of the OCT assemblages in the Early-Middle Ordovician and did not unify before the Wenlock.We therefore consider it to be highly unlikely, and not demonstrated, that the Sel and Aursunden Groups are unconformable on the rocks of the Trondheim NC in the Early-Middle Ordovician.Because the Jotun as well as the Dalsfjord and Lindås NCs display AMCG lithologies and geochronological histories similar to those of the Baltican craton, these nappes are all considered to have a Baltican ancestry.However, because the Jotun and Lindås NCs structurally overlie the OCT assemblage, in theory, they could have been detached from the Baltican plate in the Ediacaran and moved independently throughout the Cambrian–Ordovician, or they may even be exotic with respect to Baltica and originate, e.g., from Gondwana or Laurentia.Unfortunately, there are no palaeomagnetic or other constraints on the Cambrian–Ordovician latitudinal position of these units and no fossils have been described from the metasediments associated with the crystalline rocks of the Jotun, Dalsfjord or Lindås NCs.Therefore, their plate tectonic history remains partly speculative and can only be inferred based on the lithological/geochronological dataset, its tectonic relationships with the other nappes and the lithostratigraphic correlations along the mountain belt.An outboard origin of the large crystalline nappes
of Southern Scandinavia would require that these rocks were near the leading edge of the upper plate during the Cambrian–Silurian, either near the Laurentian margin or the peri-Gondwanan terranes; see e.g. Domeier for a review of the closure of the Iapetus.However, the lack of magmatic arc rocks in the crystalline basement nappes of Southern Norway and the occurrence of Late Ordovician to early Silurian metamorphic rocks including eclogites in the Dalsfjord and Lindås NCs suggest that these NCs were part of the lower plate, i.e. Baltica, during the closure of the Iapetus.Similar problems arise if we consider a scenario where these large basement nappes rifted off Baltica in the Ediacaran and subsequently moved independently of Baltica for ~200 million years, until the Scandian Orogeny, just to be juxtaposed with each other and Baltica after the continental collision in the Silurian.In this context it is important to consider that the Iapetus did not close by simple orthogonal convergence, but involved large-scale clockwise and counter-clockwise rotations of Baltica as well as major changes in plate-motion directions throughout the Cambrian to Silurian.The radiometric ages of magmatic and metamorphic minerals from the crystalline nappes are similar to those of the Baltican autochthonous basement and are of Gothian age.The Gothian autochthonous domains closest to the crystalline nappes lie partly to the NE of the present-day position of the crystalline nappes.An SE-directed emplacement of these NCs agrees with the Scandian kinematics and may indicate that the Baltican craton continued to the NW beyond the present-day North-Atlantic continental margin.Such a pre-Caledonian continuation of Baltica has previously been suggested by, e.g., Lamminen et al., but is inherently difficult to test due to pervasive overprint by the Caledonian Orogeny and the limits of the present-day continental margin.Although a direct causal relationship is difficult to demonstrate, the NE termination of the Jotun NC near Vågåmo correlates remarkably well with a major NW-SE trending change in the Baltican basement structure, which coincides with a Sveconorwegian lineament across southern Scandinavia.Whereas the outboard Caledonian nappes continue across this boundary, the transition from a magma-rich to a magma-poor domain also coincides with this lineament.We suggest that the magma-rich to magma-poor transition as well as the termination of the very large Baltican basement NCs both represent primary features of the pre-Caledonian margin of Baltica that most likely were inherited from the Middle Proterozoic structure of Baltica, which has been surprisingly little discussed in the large-scale architecture of the Scandinavian Caledonides.By using the OCT assemblage as a reference level in the tectonostratigraphy, a first order architecture of the Pre-Caledonian margin of Baltica can be deduced by “unstacking” the nappes.Orthogneisses and metasediments of the Jotun NC structurally overlie the OCT assemblage on the western side of the Jotun NC.The OCT assemblage can be traced into the Gudbrandsdalen area where it structurally overlies the Heidal Group and sheets of basement gneisses, which, in turn, structurally overlie the proximal rift-basins, including the Osen-Røa, Kvitvola, Synnfjell and Valdres NCs.Therefore, before the inversion of the Caledonian margin of Baltica, a basin, which was floored by transitional crust, separated the proximal basins and thinned continental crust to the SE from the rocks of the Jotun NC.Whereas the
Neoproterozoic metasediments of the proximal basins structurally below the Jotun NC contain no syn-rift igneous rocks, the rocks of the Høyvik Group and the orthogneisses of the Dalsfjord NC contain mafic dykes and pillow basalts.We suggest that the dyke swarm in the Høyvik Group and other correlative units of the Dalsfjord NC indicate that these rocks represent the ocean-facing, magma-rich NW rifted segment of a crystalline block outboard of the OCT assemblage.The distal position with respect to Baltica is also indicated by the Middle Ordovician deformation and metamorphism which affected the Dalsfjord-Høyvik basement-cover pair before ~449 Ma, whereas the proximal part of the rifted margin in the south apparently was little affected by this event.With regard to the points discussed above, we support the model that interprets the large crystalline basement nappes of South Norway as a former microcontinent.Although separated from the main Baltica continent by hyperextension and formation of the magma-poor OCT-unit, it still formed part of the Baltican lithospheric plate in the period between the Ediacaran and the Silurian.An outboard palaeoposition of these continental blocks was already suggested by Andersen et al. and Jakob et al., who interpreted these continental units as part of a microcontinent or continental sliver, referred to as the Jotun Microcontinent.We suggest that the microcontinent included the Dalsfjord, Lindås, and Jotun NCs, if not all the large AMCG–nappe complexes in Southern Norway.Almost all the Neoproterozoic sedimentary sequences that are structurally above the sheets of allochthonous Baltican basement, including the Lower Bergsdalen NC, Tännäs, Høvringen, Rudihø and Mukampen gneisses, contain mafic igneous rocks.Because these metasediments are interpreted to represent Meso–Neoproterozoic pre- to syn-rift sediments that were deposited on the thinned Baltican craton, the mafic intrusions within these sedimentary sequences must be younger.The rocks of the Heidal Group, Hummelfjell Group, as well as the Särv and Seve nappes can be correlated by their petrology, depositional age, and tectonostratigraphic position.Several of them contain diamictites interpreted as tillites and some also contain newly discovered stromatolites.Therefore, we follow the classical interpretations of Törnebohm and Holmsen that the mafic intrusives in the Neoproterozoic sequences of the Hummelfjell Group can be correlated with those in the Särv and Seve NCs, which were emplaced by LIP–magmatism at ~605–596 Ma.The regional correlation of these units results in a relatively simple tectonostratigraphy for the South-Central Caledonides, i.e.
from base to top: 1) basement with cover; 2) Neoproterozoic metasediments mostly without mafic igneous rocks; 3) a level of thin allochthonous basement gneisses; 4) Neoproterozoic metasediments with mafic igneous rocks; 5) Cambro-Ordovician metasedimentary complexes with abundant meta-peridotite bodies; and 6) the outboard ophiolite/island-arc complexes, including the Trondheim NC.Although masked by some additional complexities, this simple tectonostratigraphy of the South-Central Caledonides can also be recognised in the South Caledonides.The structural position of the Upper Bergsdalen NC between the Jotun NC and the metaperidotite-bearing metasediments, suggests that it originated outboard of the transitional crust basin.Because the Upper Bergsdalen NC trails out into the Ordovician OCT assemblage near Sognefjorden, it apparently was also separated from the Jotun Microcontinent; at least during the shortening of the margin but perhaps since the Ediacaran.Because of the presence of a mafic dyke swarm in the Høyvik Group of the Dalsfjord NC and a lack of mafic intrusions in the units structurally below the Jotun NC and in the metasediments of the Blåmannen Nappe, we suggest that the magma rich margin of Baltica was diverted to the outboard side of the Jotun Microcontinent.Because of the similarities of the Neoproterozoic succession, it is possible that mafic dykes in the quartzites of the Lower and Upper Bergsdalen NCs may also have been emplaced during the Ediacaran; although this remains to be substantiated by radiometric dating.As discussed above, the rift-inherited domains of the Pre-Caledonian margin along strike of the orogen include a magma-rich part preserved in the Särv and Seve NCs a magma-poor part that is presently structurally below the remnants of the Jotun Microcontinent.The Neoproterozoic continental margin successions between the magma-rich part in the north-east and the magma-poor part in the south-west are characterised by a south-westerly decrease in the abundance of syn-rift mafic plutons and volcanics.We interpret this progressive reduction of mafic igneous rocks in the Hummelfjell and Heidal groups to represent a magma-rich to magma-poor transition zone that stretches for about 200 km from the Särv and Seve NCs to the north-eastern termination of the Jotun NC.It is also noteworthy that the transition also coincides with the pre-rift Sveconorwegian lineament parallel to Gudbrandsdalen.The radiometric evidence as well as the pre-deformation and metamorphic relative ages of the mafic intrusives within the Seve and Särv NC, which correlates with the Hummelfjell Group, demonstrate that the magma-poor to magma-rich transition zone is a primary rift-inherited feature of the Central Caledonides and Ediacaran in age.The correlation of the tectonic units in the South and South-Central Caledonides presented above is further corroborated by the continuity of the peridotite-bearing OCT assemblages from the Bergen Arcs to the Skardøra Antiform.The cross sections A to C demonstrate the consistent organisation of the nappe complexes.However, a complexity is added by the presence of the Jotun Microcontinent and the Bergsdalen NCs in the SW.The Neoproterozoic successions are not present between Stølsheimen and Lom.It is likely that these units were excised by the post-orogenic extension during exhumation of the WGR.However, the Cambrian to Ordovician metaperidotite-bearing OCT metasediments can be traced almost seamlessly between Bergen and the Skardøra Antiform.These 
Cambrian–Ordovician units can be correlated by the litho- and tectonostratigraphy and also a continuous metamorphic signature as well as their depositional age across the Gudbrandsdalen Antiform.Whereas the depositional and magmatic history of the Neoproterozoic metasedimentary complexes is relatively well-constrained, the origin and significance of the Cambro–Ordovician OCT assemblages is more uncertain due to the paucity of datable rocks in this unit.For the origin of the OCT assemblages three key characteristics must be addressed: the resemblance with other OCT assemblages; the duration of deposition of the matrix sediments into the Middle Ordovician and the intrusion of minor mafic to granitoid plutons dated at 487–471 Ma.Two scenarios for the formation of the metaperidotite-bearing metasedimentary units might be proposed: The OCT assemblage was formed by reworking of an older Ediacaran basin and OCT zone in the Late Cambrian to Middle-Ordovician; or it was formed by thinning of the crust in the Late Cambrian to Middle Ordovician, which was accompanied or followed by minor intrusions.In the first scenario, the reworking of an older OCT zone assemblage may have been linked to compression along the Baltican margin in the Late Cambrian to Middle Ordovician.The reworking of transitional crust inboard of the Jotun Microcontinent was accompanied by the emplacement of minor mafic to felsic igneous rocks into older sediments at 487, 476 and 471 Ma and continued sedimentation with detrital zircons as young as 468 Ma into the Dapingian–Darriwilian.Resetting of zircons at 482 Ma in the Øygarden basement window west of the Lindås NC may also be linked to this event.Because, there is no radiometric evidence for Pre-Scandian penetrative deformation and metamorphism in the Baltican autochthon of South Norway except at Øygarden, the Early-Caledonian reworking likely involved only the outermost part of the Baltica margin, including nappes that comprise the OCT in the magma-rich part of the margin, e.g. 
the Seve NC, and along the western margin of the Jotun Microcontinent.Other indications for compression, uplift and erosion along the Baltican margin in the Early Ordovician are provided by 482 Ma eclogites in the northernmost Seve NC, the occurrences of turbidites that overly and are intercalated with Early–Middle Ordovician metapelites, which also include the Cr- and Ni-rich Elnes Formation in the Oslo region, the Föllinge Formation in Sweden and Cambrian–Ordovician successions of the proximal basins.Moreover, from the Gudbrandsdalen area towards the north-east, the OCT assemblage contains an increasing number of mafic bodies.Thus, the Ordovician units may reflect the increase of mafic igneous rocks of the underlying Neoproterozoic successions and may further support the notion that the metasedimentary complexes between Gudbrandsdalen and the Skardøra Antiform represent the remnants of the reworked outermost rifted margin of Baltica.However, except for one 618 Ma garnet no Ediacaran crystallisation ages have been reported from the OCT assemblage.The closure of the OCT basin inboard of the Jotun Microcontinent and the reworking of the OCT assemblage, is comparable with the closure of narrow oceanic basins in the Alpine Tethys realm as described by Chenin et al.The difference in style of the pre-Scandian deformation and metamorphism in the South and the Central Caledonides may be directly linked to the presence of the large, strong and mostly intact Mesoproterozoic continental crust of the Jotun Microcontinent, which thwarted pre-Scandian deep burial and deformation compared to deep burial and high-pressure metamorphism of rocks in the Seve NC and along the westernmost Dalsfjord-Høyvik area.However, except for the 618 Ma garnet, no other rocks in the OCT assemblages yielded Ediacaran ages that could be linked to the opening of the Iapetus whereas Lower Ordovician ages abound.And, because, the minimum age of some of the peridotite-bearing metasediments pre-date a minor 487 ± 2 Ma gabbro in the Bergen Arcs, these assemblages may have formed during a second phase of rifting in the Cambrian to Middle Ordovician.A modern-day analogue for this scenario could be the Tyrrhenian basin, that opened in the Pliocene–Quaternary during a phase of hyperextension and rifting after initial phase of opening of the Sardinia Province Basin in the Oligocene–Miocene.However, the Tyrrhenian opened in an upper plate, back-arc setting, for which there is little evidence in the Caledonides.None of the Baltican nappes are associated with an arc of that age and the metamorphism in the Høyvik-Dalsfjord and Seve NC rather indicate a lower plate configuration for the distal margin of Baltica.The OCT assemblage may also have formed by thinning of a forearc basin and subsequent obduction of the Ordovician units onto the Ediacaran sequences.Forearc extension has been suggested for the highly dismembered south Tibetan ophiolites.However, because of the lack of evidence for an arc along the Baltican margin of that time, the Baltican affinity of discontinuous slivers of crystalline gneisses within the OCT assemblage metasediments and the structural position of the large crystalline NCs, it is difficult to explain the formation of the OCT assemblages in a forearc setting.As an alternative to an upper plate configuration of Baltica, the OCT assemblage may have been formed with Baltica being the lower plate.On these terms the second stage of rifting and thinning may also have been related to the subduction of the northern part 
of the Baltica margin at about 482 Ma and may be comparable with the opening of the South China Sea. The main large-scale nappe translation onto Baltica took place during the final continent-continent collision, and the penetrative deformation and HP metamorphism of the Baltican basement occurred in the Late Silurian to Early Devonian, as demonstrated by the continuous SE-NW metamorphic gradient along the floor thrust and into the WGR. The outermost parts of the Baltican margin, however, may have experienced shortening as early as ~450 Ma. In Figs. 6 and 7 the palaeogeographic position of the basin with the OCT assemblage is constrained by the island-type Otta fauna, for which we estimate a minimum distance to the Baltican craton of about 1000 km, a distance great enough for the Otta fauna not to mix with the Baltican cratonic faunas. The Jotun Microcontinent is estimated to have had a minimum size of about 200 × 300 km based on the present extent of the Jotun, Lindås and Dalsfjord NCs. Thus, the distance between the outboard margin of the Jotun Microcontinent and the cratonic margin of Baltica was on the order of 1200 km. Palaeo-plate tectonic models for the closure of the Iapetus Ocean indicate that a far outboard Jotun Microcontinent, inboard of which lay a seaway as well as hyperextended to rifted segments, would have been in contact with the Laurentian cratonic margin at ~450 Ma. The arrival of the Jotun Microcontinent at the Iapetan/Laurentian subduction zone is constrained by the deformation of the Høyvik-Dalsfjord and Seve NC at ~450 Ma as well as by the eclogitisation of the Lindås NC at ~430 Ma. The age constraints for the Scandian deformation are based on the obduction and thrusting of the ~443 Ma Solund-Stavfjord back-arc ophiolite onto the fossil-bearing Wenlockian Herland Group. The shortening of the thinned margin was completed at the time the two necking domains of the Laurentian and Baltican continents collided, which coincided with the cessation of subduction-related magmatism, the earliest subduction of the WGR and the emplacement of syn-collisional granitoids in Baltican and Laurentian nappes, including the 430–415 Ma granitoids in the Norwegian allochthons and 435–415 Ma granitoids on Greenland and Svalbard. The shortening of ~1200 km of the Baltican margin between ~450 and 435 Ma would have required convergence rates between the Laurentian and Baltican cratons of about 8 cm/yr, which is well within the range of published plate tectonic models. Crustal thickening with maximum burial of the WGR and the thrusting onto the foreland, however, continued into the Lower Devonian. The eclogites in mafic dyke swarms hosted by the continental sediments of the Seve NC in Jämtland, Sweden, indicate that this part of the Seve NC was in a similarly outboard position to the Jotun Microcontinent. The even older HP metamorphic ages in the Seve NC further north may indicate that the onset of deformation along the Baltican margin was oblique and diachronous, and that the northern part of the Baltican margin was affected before the segments in the south. However, the Early–Middle Ordovician faunas of Laurentia, Baltica and at Otta are distinct, and the Iapetus was probably at its widest at this time. Therefore, as an alternative to an Early–Middle Ordovician incipient oblique closure of the Iapetus, the outermost Baltica margin may have experienced a collision in the late Cambrian to early Middle Ordovician. However, direct evidence for an arc arriving at the pre-Caledonian Baltica margin at that time is lacking. It is
commonly suggested that rift-inherited structures in continental margins are reactivated during collision and have a paramount influence on the architecture of mountain belts. It is therefore important to identify possible rift-inherited structures in the Caledonides and to include them in the tectonic evolution of the orogen. The rift-inherited magma-rich and magma-poor segments are linked by a strike-parallel transition zone of approximately 200 km width. Rift inheritance is also seen in transverse sections of the mountain belt. The consistency of the tectonostratigraphy and the characteristic lithological assemblages within the main tectonic units play a key role in this interpretation. In particular, the sediment-hosted metaperidotite-bearing assemblages represent a ‘marker horizon’ that links the South-West with the Central Caledonides. These OCT-zone remnants lie at a consistent structural level and allow for a re-interpretation of the across-strike architecture of the mountain belt. The traditional use of Lower, Middle and Upper Allochthon is inadequate, as previously outlined by Corfu et al., because the tectonostratigraphy is inherited from the highly irregular rifted margin and is not a result of shortening of a continuous and uniform rifted margin. Therefore, the nappe stack is better described in terms of rift domains defined by comparison with present-day margins, including the proximal and necking domains as well as the hyperextended and distal domains, with or without major magmatic components. The proximal/necking domain of the Scandinavian Caledonides includes the autochthonous and allochthonous Neoproterozoic successions that contain little to no syn-rift magmatism, e.g. the Osen-Røa, Synnfjell, Dividal and Risbäck NCs. These proximal rift basins record a dominantly siliciclastic input until the occurrence of minor mafic plutons and volcanics. After the early rift phase and minor mafic magmatism, the sediment system changes from siliciclastic dominated to carbonate and carbonate-shale dominated. Similar carbonate and carbonate-shale successions are also reported from the rift basins of eastern Laurentia, which indicates a comprehensive rift-wide change of the system. Relative changes in sea level move the sedimentary depo-centres either continent-ward or ocean-ward during transgression or regression events, respectively. However, changes in tectonic activity comprehensively change the sediment influx into the rift system. For example, the cessation of tectonic activity in proximal rifted-margin basins is believed to coincide with, and to be linked to, the development of so-called thinning faults due to localization of extension in the future necking and distal domains and the onset of lithospheric break-up. Therefore, the contemporaneous occurrence of carbonate and shale formations, immediately after a phase of minor mafic magmatism, that seal the previously deposited siliciclastic main-rift sequences in many proximal rift basins along the Baltican and Laurentian margins, may indicate the cessation of tectonic activity within these proximal basins. We suggest that the proximal basins record an early rift phase of initial distributed extension until the localization of extension in the future necking and distal domains, and that the localization of extension was broadly contemporaneous with the syn-rift magmatism. With the exception of the nappes comprised of continental metasediments structurally below the Jotun NC, the proximal basins are consistently overthrust by a series of thin
crystalline basement nappes with Baltican affinity. A simple restoration of these nappes requires that these gneisses were originally positioned outboard of the proximal domain of the margin. Moreover, their consistent structural position indicates that they represent a regional structural element in the continental margin rather than local imbrications. In present-day passive margins, the hyperextended domain is positioned between the necking domain and the zone of exhumed mantle in magma-poor margins, or inboard of the zone of main syn-rift magmatism in magma-rich margins. Because there is little evidence for Ediacaran magmatism reported from these basement nappes, and because of their structural position between the proximal basins and the Neoproterozoic successions, in which syn-rift igneous rocks abound, we suggest that these gneisses represent rift-inherited thinned continental crust that was positioned outboard of the necking domain after rifted-margin formation. In the magma-poor to magma-rich transition zone and the magma-rich segment of the margin, these gneisses are overlain by Neoproterozoic successions containing abundant syn-rift magmatic rocks, which we interpret as the distal domain of the rifted margin. In the magma-poor segment of the margin the distal domain is characterised by metaperidotite-bearing units that are dominantly composed of fine-grained metasediments but also include coarser-grained metasediments and slivers of continental crust. In the South Caledonides, those distal-domain assemblages are structurally overlain by the Jotun Microcontinent. By comparing our observations from the Caledonides with studies in the Alps, we find that structures inherited from the rifted margins were reactivated and developed as major 1st order thrust systems during the orogenic shortening of the Baltica margin. An imbrication of the rift domains was likely accommodated by smaller 2nd order thrusts exploiting discontinuities within the units, e.g.
changes in rheology or along rift-inherited faults.The thrusting during the main orogenic events which probably were separated in time was apparently in sequence, because, the stacking-order of Baltican nappes reflects cross-sections of the pre-Caledonian margin.Therefore, for simplicity, the shortening of the Baltica margin is in the following depicted as a single phase of shortening, neglecting possible pre-Scandian tectonism and metamorphism of the outermost margin.In the Caledonides, nappes that contain the outermost margin of Baltica including the OCT, the extensional basement allochthons, exhumed meta-peridotites, probably also embryonic oceanic crust at Vågåmo and Røros, as well as other dismembered ophiolites were emplaced onto the Neoproterozoic successions that host the rift-related mafic dyke swarms.Consecutively, the assemblages of the magma-rich and the magma-rich to magma-poor transition zone were thrust over thinned continental crust of the distal domain.The nappes of the distal domain were, in turn, thrust over the Neoproterozoic successions of the proximal/necking domain by a thrust system, which may represent the reactivated thinning faults of the necking domain.Internal imbrication of the individual domains was accommodated by sub-sets of thrust with smaller offsets.New data and field observations as well as re-interpretations based on a modern understanding of present-day continental margins, put new constraints on the evolution and architecture of the pre-Caledonian margin of Baltica.We suggest that the major differences along strike in the mountain belt originated by the highly irregular and discontinuous template related to the formation of the pre-Caledonian margin of Baltica.The most important change occurred where the large Jotun Microcontinent rifted away from Baltica in the Neoproterozoic.The NE-termination of the microcontinent may have been inherited from a Middle Proterozoic basement structure, because, the termination of the crystalline nappes correlates with the trace of the Sveconorwegian lineament across southern Scandinavia.This structure appears to be a fundamental lithospheric lineament in Scandinavia as seen by the change from shallow to deeper MOHO from SW to NE as well as magnetic anomaly studies.This pre-Caledonian lineament also coincides with the magma-poor to magma-rich segmentation along the continental margin as described-above.We suggest that large-scale discontinuities in the Sveconorwegian basement across south Scandinavia were important structural elements both during the construction of the pre-Caledonian margin of Baltica as well as during the Caledonian plate-convergence and Scandian collision.This study shows that the present-day tectonostratigraphy of the South and South-Central Caledonides was formed by the orogenic shortening of a highly irregular, Ediacaran, pre-Caledonian, rifted margin of Baltica.The nappe stack from its base to the top reflects a cross section from proximal to distal rift domains.A summary of observations and interpretations presented above include:After the post-Sveconorwegian assembly of Rodinia, followed a long period of attempted continental rifting, widespread stretching of the shield area and deposition of thick sedimentary successions through the Cryogenic and into the Ediacaran as described by Nystuen et al.The continental break-up and the eventual formation of the pre-Caledonian continental margin of Baltica may have been associated with the arrival of a mantle plume and widespread plume-magmatism at 
~615–595 Ma.Most of the pre-Caledonian margin of Baltica facing the Iapetus Ocean, including the less-well preserved westernmost margin of the Jotun Microcontinent was apparently magma-rich.However, inboard of the Jotun Microcontinent opened a magma-poor basin and seaway that was floored by hyperextended to transitional crust.Rift-related mafic igneous rocks have not been identified in this basin or in the adjacent autochthon of Baltica except for the mafic ~615 Ma dyke swarm in the Egersund area.The along-strike transition from the magma-rich to the magma-poor part took place over an approximately 200 km long orogen-parallel zone between Røros and Vågåmo.This magma-rich to magma-poor transition zone is preserved in the Neoproterozoic successions of the Hummelfjell and Heidal Groups, which represent a continuation of the Särv and Seve NCs into Norway.Additional elements of the magma-rich to magma-poor transition are the incipient formation of oceanic crust in the OCT zone, which locally may be preserved between Vågåmo and Røros as well as in the continuation of OCT assemblage into Sweden.A poorly-understood early subduction and shortening of the outermost Baltica margin may have occurred already during latest Cambrian to the Middle Ordovician and affected mainly the Seve NC.This pre-Scandian event may have been associated with or coincident with the reworking of the older hyperextended margin or a second phase of extension in the South Caledonides.Major shortening of the Baltican margin started at about 450 Ma when the outermost parts of the very wide Baltican rifted margin entered subduction zone in front of Laurentia.Deformation in the proximal/necking domains as well as the large-scale nappe translation over the Baltican craton, Scandian metamorphism and associated granite magmatism took place during the Scandian Orogeny in the late Silurian into the Early Devonian.The across-strike architecture of the nappe stack can be attributed to the stacking of rift domains.In the Central Caledonides, the stacked rift domains, from top to base, include the distal margin with the fossil OCT and break-up magmatism, remnants of the hyperextended domain and proximal rift basins.In the South Caledonides, the nappe stack also includes the Jotun Microcontinent thrust over the remnants of a failed rift hyperextended basin, floored by transitional crust.In the NE it is transitional into magma-poor to magma-rich transition zone and overlies the proximal Neoproterozoic basins.In the SW, the Upper and Lower Bergsdalen NCs, near the southern termination of the Jotun Microcontinent, were originally outboard and inboard of the hyperextended basin, respectively, and all units were thrust over the proximal basins.All of the rift-inherited tectonic units are structurally overlain by the outboard nappes with origins in the Iapetus and Laurentia.
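The convergence rate of about 8 cm/yr quoted earlier for the shortening of the Baltican margin follows directly from the estimates given in the text; a minimal worked version, assuming ~1200 km of shortening between ~450 and ~435 Ma, is:

\[ v \;\approx\; \frac{1200\ \mathrm{km}}{(450-435)\ \mathrm{Myr}} \;=\; \frac{1.2\times10^{8}\ \mathrm{cm}}{1.5\times10^{7}\ \mathrm{yr}} \;=\; 8\ \mathrm{cm/yr}. \]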
Interpretations of the pre-Caledonian rifted margin of Baltica commonly reconstruct it as a simple, tapering, wedge-shaped continental margin dissected by half-grabens, with progressively more rift-related magmas towards the ocean-continent transition zone. It is also interpreted to have had that simple architecture along strike for the whole length of the margin. However, present-day rifted margins show a more complex architecture, dominated by different and partly diachronous segments both along and across strike. Here, we show that the composition and the architecture of the Baltican-derived nappes of the South and South-Central Scandinavian Caledonides are to a large extent rift-inherited. Compositional variations of nappes in similar tectonostratigraphic positions can be ascribed to variations along strike of the rifted margin, including a magma-rich segment, a magma-rich to magma-poor transition zone, and a magma-poor segment of the margin. The architecture of the nappe stack that includes the Baltican-derived nappes was formed as a result of the reactivation of rift-inherited structures and the stacking of rift domains during the Caledonian Orogeny.
432
A study on the sensitivities of simulated aerosol optical properties to composition and size distribution using airborne measurements
Atmospheric aerosols affect the Earth's climate both directly, through the scattering and absorption of radiation, and indirectly, via changes to cloud microphysics and properties. Moreover, aerosols also affect visibility and air quality, as well as human health. In order to estimate the direct effect, climate models generally require aerosol optical properties such as the extinction coefficient, the single scattering albedo and the asymmetry parameter. To obtain these, they first need to quantify the spectral refractive index, the size distribution, the hygroscopicity and the mixing state of atmospheric aerosols. Each of these properties is a complex function of aerosol size, composition, and chemical and physical processing. Thus, due to this complexity of atmospheric aerosols, models and measurements need to be combined in order to provide the information required by climate models. Closure between the measured aerosol scattering and absorption and that calculated with a scattering code using chemical composition and particle size information has been attempted before by several studies. However, recent additions to the instrumentation aboard the Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 aircraft have made possible the measurement of aerosol scattering as a function of relative humidity and of black carbon mass, allowing more accurate closure studies to be performed. In this work we present a flexible framework for assessing parameterizations of the optical properties and hygroscopic growth of aerosols. This framework is used to calculate the optical properties of atmospheric aerosols at a given relative humidity based on their composition and size distribution, which can then be compared with measured values of the same quantities. In our case, the FAAM BAe-146 aircraft provides measurements of the chemical composition and of the microphysical, optical and hygroscopic properties of the atmospheric aerosols, which allow us to explore the agreement between models and measurements of the aerosol optical properties for two very different aerosol types. Section 2 of this paper describes the framework and the data from the FAAM BAe-146 aircraft that are used. Section 3 presents the closure study of the aerosol optical properties. Section 4 discusses the uncertainties associated with the calculated aerosol optical properties. The work's conclusions are presented in Section 5. We have developed a flexible framework to calculate the scattering and absorption by atmospheric aerosols at a given relative humidity based on their composition and size distribution. The framework can be used with different scattering codes and mixing states, but here we use Mie scattering for homogeneous internally mixed spheres. Although aerosols, and particularly black carbon, are not always spherical, this assumption is valid for well-mixed anthropogenic aerosols, especially in moderately humid environments, and is frequently used for most anthropogenic aerosol types. The way in which the different components are distributed within the aerosol particles is referred to as the mixing state, which ranges from external to homogeneous internal mixture. An external mixing state is an appropriate assumption for freshly emitted aerosols, which have not had time to undergo chemical reaction or coalescence. An internal mixture is a better assumption for older, well-mixed aerosol. Well-mixed anthropogenic aerosols can usefully be modelled as having a homogeneous internal mixing state, while a core and shell model would be more appropriate if a large
mass of black carbon was present.Although our framework includes the possibility of choosing between this whole range of mixing states, since the cases considered here are of well-mixed anthropogenic aerosols with none or small amounts of black carbon, we will focus on the homogeneous internal mixing case.The ambient size distribution is then calculated by applying this mixed growth factor to the dry size distribution.Next, the mass of water taken up by the aerosol is calculated by comparing the average volume of the dry aerosol with that of the ambient aerosol.By including this water as an additional chemical component, it is then possible to calculate the refractive index of the internally mixed aerosol at a given relative humidity, and for a variety of wavelengths, by applying the ZSR volume mixing rule.The resultant ambient size distribution and refractive index are then passed in this case to the Mie scattering code of Wiscombe in order to calculate the aerosol optical properties.Although other similar frameworks exist, including OPAC by Hess et al. which is still widely used to specify aerosol for use in satellite retrievals, this framework is much more flexible, allowing the use of composition and size distributions directly.In addition, since it is closer to the parameterisations used in climate models, it allows convenient and rapid testing of the impact of uncertainties in data, or new measurements on climate relevant aerosol properties.The refractive indices of major aerosol components such as ammonium sulphate, ammonium nitrate, black carbon and organic aerosol assumed by the framework are based on a literature review of field observations and laboratory studies.The refractive index for sulphate, which is a scattering aerosol with no absorption in the visible spectrum, is taken from Toon et al.However, the refractive index for nitrate, another scattering aerosol with no absorption in the visible spectrum, is not well characterized although it is an important contributor to light scattering in the atmosphere.In this framework, we use a single value with no absorption component from Weast below 0.7 μm, the values from Gosse et al. in the intermediate range and the values from Jarzembski et al. in the infrared.Due to technical issues in the measurement of the abundance and optical properties of black carbon, which is highly absorbing in the visible spectrum, there is considerable debate regarding the most appropriate value for its refractive index.We use here the more absorbing refractive indices from Bond and Bergstrom.The refractive index of organic aerosol is difficult to define because its properties vary according to source, location, combustion type and aerosol age.In this framework, we use a refractive index based on that of Swannee River Fulvic Acid at 532 nm, with the wavelength dependence of Kirchstetter et al. in the imaginary part between 350 and 700 nm, and being wavelength independent in the real part between 400 and 700 nm.For wavelengths above 4 μm, the wavelength dependence for the water soluble type from Hess et al. 
is used.SRFA was assumed since it can be considered to be representative of aged organic aerosol, which was found in the measurement campaigns considered here.The refractive indices at 550 nm assumed by the framework for these major aerosol components are specified in Table 1.The hygroscopic growth factors of major aerosol components such as ammonium nitrate, ammonium sulphate and black carbon have been well studied; that of organic aerosol is worse known.The hygroscopic growth factors of sulphate and nitrate depend strongly on the ambient relative humidity, and the values reported in the literature are either derived from or in agreement with the values reported in Tang.However, while the growth factors of ammonium sulphate and ammonium nitrate depend on the initial size of the aerosol, data relating growth factor and initial aerosol size are very limited.Black carbon is a hydrophobic aerosol, and it is generally accepted that its growth factor is approximately 1, and independent of the ambient relative humidity or the initial aerosol size.The growth factor of organic aerosol is a complex function of its component organic compounds, the combustion processes which produced them, chemical processing in the atmosphere and mixing with ambient aerosol.Studies of individual organic compounds, as well as various hygroscopic closure studies, have generally found a modest growth factor for organic aerosol, and it is not thought to depend on the initial aerosol size.The hygroscopic growth factors at a relative humidity of 80% assumed by the framework for these major aerosol components are specified in Table 1.The instrumentation aboard the FAAM BAe-146 aircraft measures the chemical composition, microphysical, optical and hygroscopic properties of the atmospheric aerosols, and it has been described in detail in Johnson et al., Osborne et al., McMeeking et al. and Morgan et al.In this study, we have used the data collected by the FAAM BAe-146 aircraft during the European Integrated Project on Aerosol Cloud Climate and Air Quality Interactions Long Range Experiment and the VAMOS Ocean-Cloud-Atmosphere-Land Regional Experiment.The EUCAARI-LONGREX campaign consisted of 15 flights over central Europe or off the UK coast during May 2008, and its meteorology and aerosol measurements have been fully discussed by McMeeking et al., Morgan et al., Hamburger et al. and Highwood et al.The VOCALS-REx campaign consisted of 10 flights over the South East Pacific region between October and November 2008, and the aerosol measurements made by the FAAM BAe-146 aircraft during this campaign have been described by Allen et al.Each flight for both campaigns consisted of a number of straight level runs at different altitudes with varying time durations.The motivation for using these two campaigns to explore the agreement between modelled and measured aerosol optical properties comes from the very different chemical composition of the atmospheric aerosols.Fig. 
1 shows the mean mass concentration of the main aerosol components for each of the flights during the two campaigns. During EUCAARI-LONGREX, aerosols were mainly composed of nitrate, sulphate and organic matter, with small concentrations of black carbon. However, during VOCALS-REx, the aerosol composition was clearly dominated by sulphate, thus potentially representing a "simpler" aerosol system, although there are subtle differences between the composition data reported by the various VOCALS-REx studies. Ammonium sulphate, as found during EUCAARI-LONGREX, is assumed for VOCALS-REx, and the differences regarding the composition of the sulphate aerosol show little impact on our results. Nitrate is not reported for VOCALS-REx since its concentrations were not registered above the detection limit of the AMS. Additionally, the instrument aboard the FAAM BAe-146 aircraft used to measure black carbon was not operational during the VOCALS-REx campaign. A further motivation for using these two campaigns is the difference in ambient relative humidity conditions, which would also be expected to have a significant impact on the aerosol properties. The ambient relative humidity for the SLR used in this study during the EUCAARI-LONGREX campaign was in the range 29–87%, with an average value of 52%, while during the VOCALS-REx campaign it was in the range 70–92%, with an average value of 85%. Fig. 3 shows the comparison of the calculated values of the aerosol absorption at 550 nm with the measured ones for "dry" aerosol, averaged for each SLR in every flight of EUCAARI-LONGREX. The comparison for VOCALS-REx is not reported because the absorption registered for all flights of the campaign was below the detection limit of the Particle Soot Absorption Photometer, in agreement with the fact that the aerosol composition during VOCALS-REx was clearly dominated by sulphate, which is a scattering aerosol with no absorption in the visible spectrum. The calculated absorption underestimates the measured values for most of the flights of EUCAARI-LONGREX, although the agreement between calculated and measured absorption is within the 50% uncertainty of the measurements. These results are in agreement with those obtained by Highwood et al. for dry aerosol during the EUCAARI-LONGREX campaign. The bias between the calculated and measured absorption could be due to the uncertainty in the choice of refractive indices, especially those for black carbon and organic aerosol, although we expect the effect of the refractive index of black carbon to be weak here due to the small concentrations of black carbon registered during EUCAARI-LONGREX. Fig.
4 shows the comparison of the calculated values of f at 550 nm with the measured ones averaged for each SLR in every flight of EUCAARI-LONGREX and VOCALS-REx.There is poor agreement for all flights of EUCAARI-LONGREX, with the calculated f clearly overestimating the measured values by ∼30%.There is slightly better agreement for all flights of VOCALS-REx, with the calculated f overestimating the measured values by ∼20%.However, due to the large uncertainty of the measured f of 60%, the agreement between calculated and measured values is well within the uncertainty of the measurements.The bias between the calculated and measured hygroscopic scattering growth factor would be due to the same factors causing the bias between the calculated and measured scattering, i.e., the uncertainty in the choice of hygroscopic growth factors, especially those for organic aerosol and sulphate, and in the aerosol size distribution.Although in Figs. 2–4 we have shown the uncertainty in the measurements but not in the calculations, uncertainty also exists in the calculated values mainly due to uncertainties in the refractive indices, the hygroscopic growth factors and the aerosol size distribution.Probably the uncertainty is higher for the EUCAARI-LONGREX campaign since it is chemically more complex than the VOCALS-REx campaign, the latter being dominated by sulphate which has been moderately well studied.In the following section we assess the sensitivity of the calculated scattering and absorption to these factors.To test the sensitivity of the calculated scattering and absorption to the choice of refractive indices, particularly those for black carbon and organic aerosol, we have repeated our calculations using different values of those refractive indices, and we have compared the new results with the old ones.There is still considerable debate regarding the most appropriate value for the refractive index of black carbon.Originally we used the “high absorbing” refractive index from Bond and Bergstrom, but we could have used the value from Hess et al. or the “medium absorbing” refractive index suggested by Stier et al.Only the flights for the EUCAARI-LONGREX campaign have been used in this test.The sensitivity of the calculated scattering and absorption to assumptions in refractive indices of black carbon and organic aerosol, hygroscopic growth factors of organic aerosol and sulphate and aerosol size distribution is shown in a box diagram in Fig. 
5.The dividing segment in the box is the median.The bottom/top box limits represent the 1st and 3rd quartiles.The box bars represent the minimum and maximum.Absorption is more sensitive than scattering to the refractive index for black carbon, but the effect here is very weak due to the small concentrations of black carbon present during EUCAARI-LONGREX.The refractive index for organic aerosol has therefore much more impact, especially in the absorption.All flights for both the EUCAARI-LONGREX and VOCALS-REx campaigns have been used in this test.Removing all absorption by organic aerosol as suggested in some previous studies produces changes in the calculated absorption of ∼60% on average and up to ∼81%, while the changes in the calculated scattering are of ∼3% on average and up to ∼9%.Having some weak absorption by organic aerosol reduces the changes in the calculated absorption and scattering to ∼28% and ∼1.4% on average, respectively.These observed changes worsen the agreement between the measured and calculated scattering and absorption.Following Highwood et al., RI_OC and RI_OC in Fig. 5 refer to an assumption of no absorption and some weak absorption by organic carbon, respectively.The sensitivity to the choice of hygroscopic growth factors, particularly those for organic aerosol and sulphate, is tested here for the calculated scattering.Most studies report hygroscopic growth factors for organic aerosol in the range of 1–1.65, with a mean value of 1.20.We have repeated our scattering calculations for all flights of EUCAARI-LONGREX and VOCALS-REx using this mean value, which is independent of the ambient relative humidity, and we have compared them with the scattering calculated using the hygroscopic growth factor from Brooks et al.The observed change in the calculated scattering is of ∼15% on average and up to ∼37%, slightly improving the agreement between the measured and calculated scattering.The hygroscopic growth factors for sulphate depend on both the ambient relative humidity and the initial size of the aerosol.However, information on the latter is very limited.Originally we used the hygroscopic growth factors for sulphate from Tang, who reported values in the range 1.20–1.75 with dependence on the ambient relative humidity, but not on the initial aerosol size.Topping et al. 
reported values in the range 1.66–1.73 depending on the initial aerosol size for a fixed ambient relative humidity of 90%, which involve a variation of approximately ±4% in the hygroscopic growth factor for sulphate.We have repeated our scattering calculations for all flights of VOCALS-REx, which were clearly dominated by sulphate, using this variation of ±4% on the hygroscopic growth factor from Tang, and we have compared them with the original calculations.The sensitivity of the calculated scattering to the hygroscopic growth factor of sulphate is small, of only ∼8.5% on average and up to ∼9.4%, although we would expect the hygroscopic growth factor of sulphate to be much more sensitive to the ambient relative humidity than to the initial aerosol size.To test the sensitivity of the calculated scattering and absorption to the aerosol size distribution, we have repeated our calculations for EUCAARI-LONGREX using flight-mean size distributions instead of the measured ones for each SLR of each flight of the campaign, and we have compared both sets of results.The change in the calculated scattering and absorption is significant, ∼35% on average for both the scattering and the absorption and up to 134% and 122% for the scattering and the absorption, respectively.Thus the use of a campaign mean as for VOCALS is likely to be a significant hindrance to obtaining a meaningful model of aerosol optical properties.Climate models often require aerosol optical properties to be prescribed.We have presented a flexible framework for calculating aerosol optical properties from commonly made measurements of aerosol composition and size distribution.For two different aerosol types, a complex multicomponent aerosol dominated by organic aerosol and ammonium nitrate and a simpler sulphate dominated aerosol, we have assessed the degree to which we can achieve closure with measured aerosol optical properties.We have also identified and quantified the largest sensitivities of the optical properties calculated in this way.Our framework can replicate ambient scattering to within the measurement uncertainty for the complex EUCAARI-LONGREX aerosol, although the agreement is less good for the simpler VOCALS-REx aerosol.However, we do not have access to detailed size distributions for individual SLR from VOCALS-REx and our sensitivity tests show that size distribution is a large source of uncertainty in scattering.The second largest source of uncertainty in scattering is the growth factor assumed for organic aerosol.Our framework can also replicate dry absorption to within measurement uncertainty for EUCAARI-LONGREX, however no closure study was possible during VOCALS-REx because of the unreliable absorption measurements below the detection limit of the measurement instrument.For absorption, the refractive index of the organic aerosol is the predominant source of uncertainty.The hygroscopic scattering growth factors, f, predicted by the framework seem at odds with the relative agreement in scattering.The calculated hygroscopic scattering growth factors seem much larger than the measured values.The reason for this is unclear, but measurement uncertainty in f is large.Our results indicate that improvements in the accuracy of the aerosol radiative impact would come from better representation of aerosol size distributions and measurements of growth factors at a variety of sizes and relative humidities for organic aerosol.Measurements of hygroscopicity of real atmospheric aerosol alongside optical properties and refractive 
indices measurements would be a significant advance.
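To make the calculation chain described above concrete, the sketch below shows, in Python, the kind of per-particle computation the framework performs: mixing the dry components by volume, growing the particles with a ZSR-style mixed growth factor, adding the implied water volume and forming a volume-weighted complex refractive index that would then be passed, together with the wet size distribution, to a Mie code such as Wiscombe's. This is only a minimal sketch under stated assumptions: the densities, growth factors and refractive indices in COMPONENTS are illustrative placeholders rather than the framework's Table 1 values, and the cube-root volume weighting of component growth factors is one common choice among several.

```python
import numpy as np

# Illustrative placeholder values at 550 nm and RH = 80% (NOT the framework's
# Table 1 values): density [g cm^-3], diameter growth factor, refractive index.
COMPONENTS = {
    "ammonium_sulphate": dict(density=1.77, gf=1.5, ri=1.53 + 0.0j),
    "organic_aerosol":   dict(density=1.35, gf=1.2, ri=1.53 + 0.006j),
    "black_carbon":      dict(density=1.80, gf=1.0, ri=1.85 + 0.71j),
}
RI_WATER = 1.33 + 0.0j


def mixed_optical_inputs(mass_conc, dry_diameters):
    """Return wet particle diameters and a ZSR-type volume-weighted complex
    refractive index for a homogeneous internally mixed aerosol.

    mass_conc      dict: component name -> mass concentration (e.g. ug m^-3)
    dry_diameters  array of dry particle diameters
    """
    # Dry volume contributed by each component (mass / density).
    vols = {k: mass_conc[k] / COMPONENTS[k]["density"] for k in mass_conc}
    v_dry = sum(vols.values())
    vfrac = {k: v / v_dry for k, v in vols.items()}

    # Mixed (diameter) growth factor: volume-weighted average of gf^3,
    # then the cube root -- a common ZSR-style mixing assumption.
    gf_mixed = sum(vfrac[k] * COMPONENTS[k]["gf"] ** 3 for k in vfrac) ** (1 / 3)
    wet_diameters = np.asarray(dry_diameters, dtype=float) * gf_mixed

    # Water volume implied by the growth, then the volume-weighted refractive
    # index of the internal mixture including that water.
    v_wet = v_dry * gf_mixed ** 3
    v_water = v_wet - v_dry
    ri_mixed = (sum(vols[k] * COMPONENTS[k]["ri"] for k in vols)
                + v_water * RI_WATER) / v_wet

    return wet_diameters, ri_mixed


if __name__ == "__main__":
    masses = {"ammonium_sulphate": 3.0, "organic_aerosol": 2.0, "black_carbon": 0.2}
    wet_d, ri = mixed_optical_inputs(masses, dry_diameters=[0.1, 0.2, 0.5])  # um
    print("wet diameters (um):", wet_d)
    print("mixed refractive index:", ri)
    # wet_d, ri and the grown size distribution would then be handed to a
    # Mie scattering code to obtain scattering, absorption and asymmetry.
```

In practice, the composition would come from the AMS measurements and the dry size distribution from the per-SLR measurements described above; the point of the sketch is only to show how the ZSR-type volume mixing couples composition, hygroscopic growth and refractive index before the Mie calculation.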
We present a flexible framework to calculate the optical properties of atmospheric aerosols at a given relative humidity based on their composition and size distribution. The similarity of this framework to climate model parameterisations allows rapid and extensive sensitivity tests of the impact of uncertainties in data or of new measurements on climate relevant aerosol properties. The data collected by the FAAM BAe-146 aircraft during the EUCAARI-LONGREX and VOCALS-REx campaigns have been used in a closure study to analyse the agreement between calculated and measured aerosol optical properties for two very different aerosol types. The agreement achieved for the EUCAARI-LONGREX flights is within the measurement uncertainties for both scattering and absorption. However, there is poor agreement between the calculated and the measured scattering for the VOCALS-REx flights. The high concentration of sulphate, which is a scattering aerosol with no absorption in the visible spectrum, made the absorption measurements during VOCALS-REx unreliable, and thus no closure study was possible for the absorption. The calculated hygroscopic scattering growth factor overestimates the measured values during EUCAARI-LONGREX and VOCALS-REx by ~30% and ~20%, respectively. We have also tested the sensitivity of the calculated aerosol optical properties to the uncertainties in the refractive indices, the hygroscopic growth factors and the aerosol size distribution. The largest source of uncertainty in the calculated scattering is the aerosol size distribution (~35%), followed by the assumed hygroscopic growth factor for organic aerosol (~15%), while the predominant source of uncertainty in the calculated absorption is the refractive index of organic aerosol (28-60%), although we would expect the refractive index of black carbon to be important for aerosol with a higher black carbon fraction. © 2014 Elsevier Ltd.
433
The Temporal Dynamics of Arc Expression Regulate Cognitive Flexibility
The activity-regulated protein Arc/Arg3.1 is essential for spatial memory acquisition and consolidation.Arc is required for protein-synthesis-dependent synaptic plasticity related to learning and memory, making it one of the key molecular players in cognition.Arc protein expression is highly dynamic: increasing and then rapidly declining following increased network activity or exposure to a novel environment.Retrieval of a memory also induces Arc expression which then rapidly decays.The regulation of Arc protein induction occurs at the level of mRNA transcription, mRNA trafficking, and protein translation.Although the importance of Arc induction is clear, the role of Arc protein degradation in synaptic plasticity and learning-related behaviors is still unknown.To determine the importance of Arc removal, we generated a mutant mouse line in which ubiquitin-dependent degradation of Arc is disabled.We show that ArcKR mice display impaired cognitive flexibility that is coupled with elevated levels of Arc protein expression, a reduced threshold to induce mGluR-LTD, and enhanced mGluR-LTD.We further show that behavioral training alters Arc mRNA expression and modulates the magnitude of mGluR-LTD.Arc promotes AMPA receptor endocytosis following activation of group I mGluRs with the agonist DHPG.This effect is reduced by overexpression of Triad3A/RNF216, which targets Arc for degradation.Conversely, Triad3A/RNF216 depletion increases Arc levels, thus enhancing AMPAR endocytosis.We recorded mEPSCs in neurons expressing short hairpin RNA directed against Triad3A/RNF216.Depletion of Triad3A/RNF216 significantly enhanced DHPG-dependent reduction in AMPAR-mediated mEPSC amplitude within 2–3 min compared to neurons expressing a scrambled shRNA.Triad3A/RNF216 and Ube3A E3 ligases ubiquitinate Arc on lysines 268 and 269, targeting Arc for proteasome-mediated degradation.To confirm that ubiquitination of Arc modulates surface AMPAR expression, we transfected hippocampal neurons with either Arc-WT, Arc-2KR, or Arc-5KR and then stained for the surface AMPAR subunit, GluA1.Overexpression of the Arc-KR mutants had comparable effects, both producing a greater decrease in surface GluA1 expression compared to Arc-WT.Thus, a reduction in Arc protein degradation enhances GluA1 internalization and suggests that expression of a degradation-resistant Arc protein would augment AMPAR endocytosis in vivo.We next created an Arc knockin mouse where lysine 268 and 269 were mutated to arginine to prevent Arc ubiquitination.ArcKR mice were born with expected Mendelian ratios, with no differences in mortality rate or in weight of heterozygous or homozygous ArcKR/KR mice compared to Arc+/+ littermates.There were no significant differences in the expression of various scaffold proteins, NMDA receptors, or AMPAR subunits in synaptosomes.Expression levels of mGluR1/5 and Arc protein were similar in WT and ArcKR mice as was Arc mRNA.To confirm that proteasome-mediated turnover of Arc was impaired, we monitored Arc protein levels following addition of DHPG, which induces Arc translation and ubiquitination.In WT neurons, addition of DHPG increased Arc protein, peaking at 120 min post-induction and decaying near baseline levels at 480 min.Addition of DHPG to ArcKR hippocampal neurons resulted in persistent Arc protein elevation.To measure Arc degradation, we applied the protein synthesis inhibitor anisomycin after DHPG to halt Arc protein synthesis.In WT neurons, Arc levels were reduced after anisomycin treatment, consistent with rapid 
Arc degradation.In contrast, this decline was significantly blunted in ArcKR neurons, demonstrating the importance of proteasomal degradation in limiting the half-life of Arc protein.We have demonstrated that Arc ubiquitination via K48 linkages could be elicited by pilocarpine-induced seizures in vivo.Pilocarpine-induced seizures in WT mice resulted in an increase in K48-linked Arc ubiquitination, an effect that was attenuated in ArcKR mice.Consistent with the reduced surface expression of GluA1, we observed increased GluA1 endocytosis in ArcKR neurons treated with DHPG using a high-content AMPA receptor trafficking assay.Intriguingly, we found a significant increase in the surface expression of the GluA2 subunit in ArcKR neurons, at short time points, indicating a potential receptor subunit replacement.These findings support our previous observation that overexpression of ArcWT in hippocampal neurons increases the rectification index of AMPAR-mediated miniature-EPSC amplitude, indicating an increase in the proportion of AMPAR-containing GluA2 subunits.Given the increase in GluA1-containing AMPAR endocytosis rates, we speculated that mGluR-LTD would be enhanced, as this form of plasticity requires Arc-dependent AMPAR internalization.Basal synaptic transmission and synaptic plasticity were measured in hippocampal slices from ArcKR and WT littermates.ArcKR mice had unaltered basal synaptic transmission: no significant change in paired-pulse facilitation, input-output relationship, or ratio of fEPSP slope to volley amplitude.Thus, under basal conditions, Arc ubiquitination has little effect on synaptic AMPAR expression consistent with no changes in protein and mRNA expression.To investigate the role of Arc degradation in synaptic plasticity, we induced mGluR-LTD with DHPG in hippocampal slices from ArcKR and WT littermates.We observed no significant difference between genotypes when DHPG was present.However, LTD was significantly enhanced in ArcKR mice.To test whether the threshold to induce LTD is reduced in ArcKR slices, we applied a lower concentration of DHPG.This lower concentration of DHPG was insufficient to induce LTD in WT mice but was sufficient to induce LTD in slices from ArcKR mice.Thus, reduction of Arc ubiquitination reduces the threshold to induce LTD and enhances the magnitude of mGluR-LTD.We considered whether ArcKR mice displayed behavioral deficits.WT and ArcKR mice had no overt motor abnormalities, similar anxiety levels and recognition memory.We next explored the role of Arc ubiquitination in hippocampal-dependent spatial learning by using a modified Barnes maze.Mice were tested for 21 consecutive days to test acquisition, consolidation, and expression phases of learning.On day 16, the platform was rotated 180°, requiring the mice to learn a new location for the exit hole.No differences were observed in spatial acquisition during days 1–15.Following 180° rotation of the exit hole on day 16, there was no significant difference in distance traveled, but there was a significantly higher number of errors and selective perseverance bias in ArcKR mice during the reversal phase.However, there was no difference in quadrant bias ratios during the acquisition phase.This suggests that ArcKR mice show impairments in performing the task specifically during reversal learning.We next examined the strategy used to search for the maze exit hole.Both WT and ArcKR mice showed a similar shift from a combination of random and serial strategies to a spatial search strategy.However, when the 
maze exit was rotated 180°, there were clear differences in the search strategy employed by WT and ArcKR mice.On the day of reversal, WT and ArcKR mice used similar search strategies to the first day of training.However, in subsequent days, WT mice replaced random and serial for a spatial search strategy.In contrast, ArcKR mice only employed a serial search strategy on days 17–19, before utilizing a combination of strategies on days 20 and 21.Taken together, ArcKR mice are unable to reuse strategic approaches previously acquired during task learning, suggesting cognitive inflexibility.To address whether training of a spatial-dependent task had an impact on Arc expression and subsequent hippocampal synaptic plasticity, we used hippocampi of trained WT and ArcKR littermates to either measure Arc mRNA and protein expression or to measure mGluR-LTD.Compared to WT, we observed a significantly larger decrease in the fEPSP slope during DHPG application in slices from trained ArcKR mice, and the expression of LTD was significantly increased in ArcKR slices.Consistent with the enhanced decrease in the amplitude of fEPSPs during DHPG application, the levels of Arc protein were also significantly increased in hippocampal lysates obtained from ArcKR, but not from WT mice.Barnes maze training of WT and ArcKR mice for 15 days resulted in a significant reduction in Arc mRNA with no change in Grm5 mRNA levels compared to the first day of training.This expression was not different between WT and ArcKR mice, indicating that Arc mRNA dynamics are not altered during spatial learning as has been observed in other Arc transgenic mouse models.Intriguingly, we found that reversal learning led to an increase in Arc mRNA in ArcKR mice, suggesting delayed dynamics in Arc mRNA induction that mirrored the learning deficits in ArcKR mice during reversal learning.To determine the effect that Barnes maze training on basal synaptic transmission and mGluR-LTD, we recorded interleaved slices from trained and naive WT and ArcKR littermates.Training had no effect on basal synaptic transmission in either WT or ArcKR mice.However, training significantly reduced mGluR-LTD in both trained genotypes.This reduction was not presynaptic.In ArcKR-trained slices, there was a decrease in fEPSP amplitude during DHPG application, an observation that was not seen in WT mice.These findings suggest that the temporal dynamics of Arc protein expression, induced by behavioral training, modulates subsequent mGluR-LTD.Arc protein expression is exquisitely regulated by neuronal activity.Here we have determined the functional consequences of modifying the temporal profile of Arc expression.Using a novel mouse we showed: mGluR1/5-dependent induction of Arc protein enhances GluA1-containing AMPAR endocytosis; enhanced mGluR-LTD and a reduced threshold to induce LTD; deficits in selecting strategies to perform the reversal of a spatial learned task that is coupled to the changes in LTD; and increased Arc mRNA expression is associated with reversal learning.Previous studies have linked alterations of mGluR-LTD to changes in spatial learning and task reversal.Since deficits in mGluR-LTD are associated with impairment of the acquisition/consolidation of spatial learning and poor performance with task reversal, it might be predicted that ArcKR mice, which show enhanced mGluR-LTD, would exhibit improved spatial learning.However, this was not the case, with ArcKR mice showing specific defects in reversal learning strategies.Instead of using a combination of 
random, serial, and spatial strategies, as observed in WT, ArcKR mice relied primarily on a serial strategy, resulting in more errors during task reversal. These mice are unable to engage multiple strategic approaches and thus lack the cognitive flexibility required to efficiently complete the task. How does the enhancement of LTD lead to specific cognitive deficits? During reversal learning, memories of the previously learned task are updated as new memories are acquired. Neural representation of such memory updating requires a precise balance between synaptic depression and potentiation. If the amplitude of the depression is too high or occurs too early, this might delay or prevent the acquisition of new memories. Conversely, if LTD is impaired, this could prevent the updating of memories acquired during the acquisition phase, interfering with task reversal. Intriguingly, defects in these forms of plasticity are observed in neurological disease models, which are associated with elevated levels of Arc. Thus, an optimal balance of protein translation, synthesis, and turnover is required for the correct expression of mGluR-LTD and learning behavior. The degree of inhibition produced by low concentrations of DHPG was the same in both genotypes, suggesting similar activation of mGluRs and downstream pathways, but LTD was only produced in ArcKR slices. It seems likely that Arc is degraded in WT mice and does not reach a sufficient concentration to induce LTD, whereas in ArcKR mice, Arc persists and thus induces LTD. The amplitude of mGluR-LTD is enhanced in ArcKR mice and has a postsynaptic origin, as there were no changes in paired-pulse facilitation. This is further supported by the increased internalization of the GluA1-containing AMPAR subunit in ArcKR neurons after DHPG exposure. Corroborating this hypothesis is the observation that there is a significant reduction in the rectification index of AMPAR-mediated mEPSC amplitude in neurons overexpressing Arc, indicating a reduction in the number of GluA1-containing AMPAR subunits at synapses. There were no changes in Arc protein or Arc mRNA levels between genotypes under basal conditions, consistent with the lack of differences in basal synaptic transmission. Similar to previous reports, we found that prolonged behavioral training reduced Arc mRNA levels, although there were no changes in Arc protein expression. This may reflect a fall in transcription rate, a loss of mRNA, or the slow degradation of protein. Following completion of the reversal task, there was a significant increase in Arc mRNA and Arc protein in ArcKR mice. Interestingly, the amplitude of LTD was reduced in both WT and ArcKR mice following behavioral training when compared to naive littermates. This did not appear to be a consequence of changes in Grm5 mRNA expression. The mechanism for this reduction in LTD is unclear but is likely postsynaptic, as no changes in paired-pulse ratios were observed. A possible explanation is that the increased expression of Arc, induced during the reversal task, partially occludes subsequent depression through a feedback mechanism that reduces the induction of Arc or alters signaling pathways downstream of Arc. Although Arc ubiquitination was attenuated in ArcKR mice, there was no accumulation of Arc protein in vivo. This suggests that this Arc ubiquitination pathway is not utilized as frequently during early stages of development. Alternatively, additional pathways might ubiquitinate Arc earlier in life and during specific phases of learning. Indeed, following
stimulation of N-methyl-D-aspartate receptors, another unknown E3 ubiquitin ligase has been proposed to ubiquitinate Arc at an alternative lysine site, K136, leading to Arc degradation by the ubiquitin proteasome pathway.However, this does not appear to be involved in AMPAR endocytosis as mutation of this site in addition to K268 and K269 did not further alter AMPAR internalization.An alternative interpretation is that the lysosome-dependent pathway regulates Arc degradation.Recent findings suggest that Arc protein and mRNA undergo self-intercellular transfer by assembling into virus-like capsids.This mechanism may require exosome secretion from one neuron and endocytic uptake by another.A possible explanation for the moderate accumulation of Arc protein in ArcKR mice is that Arc ubiquitination participates in this transfer process.Evidence to support this hypothesis is highlighted by our recent findings demonstrating that the E3 ubiquitin ligase for Arc, Triad3A/RNF216 is enriched at clathrin-coated pits, regions that participate in the endocytosis of cargo molecules.Recently, a point mutation in Arc was shown to enhance binding to Triad3A but decrease interactions with AP-2 and dynamin.These findings suggest another functional role for Arc/Triad3A interaction.Further evidence supporting these findings is that expression of the ArcKR mutant stays longer at the plasma membrane but rarely overlaps with CCPs, suggesting that Triad3A-dependent ubiquitination might couple Arc to endocytic regions.Our results reveal that disruption in the degradation of a single protein, Arc, is sufficient to enhance mGluR-LTD resulting in deficits in reversal learning strategy.Thus, manipulation of Arc longevity may be a strategy to restore synaptic plasticity defects in neurological disorders where Arc protein dynamics are disrupted.As Lead Contact, Angela M. Mabb is responsible for all reagent and resource requests.Please contact Angela M. 
Mabb at [email protected] with requests and inquiries.Mice were kept in standard housing with littermates, provided with food and water ad libitum and maintained on a 12:12 cycle.All behavioral tests were conducted in accordance with the National Institutes of Health Guidelines for the Use of Animals.All behavioral studies with the exception of the Barnes Maze test were conducted using approved protocols at the University of North Carolina, Chapel Hill.The Barnes Maze test and the hippocampal slice experiments were performed at the University of Warwick.The mice were treated in accordance with the Animal Welfare and Ethics Committee and experiments were performed under the appropriated project licenses with local and national ethical approval.Samples sizes for behavioral and slice experiments were calculated using variance from previous experiments to indicate power, which statistical analysis significance was set at 95%.Primary neuron culture, pilocarpine seizure experiments, and isolation of brain tissue for biochemical experiments were approved by the Georgia State University Institutional Animal Care and Use Committee.Hippocampal slices were obtained from 21 to 35 day-old WT and ArcKR littermates.For trained mice, slices were obtained up to 1 hr after the last training session.Animals were sacrificed by cervical dislocation and decapitated in accordance with the UK Animals Act.The brain was rapidly removed and placed in ice-cold high Mg2+, low Ca2+ artificial CSF, consisting of the following: 127 NaCl, 1.9 KCl, 8 MgCl2, 0.5 CaCl2, 1.2 KH2PO4, 26 NaHCO3, 10 D-glucose.Parasagittal brain slices were then prepared using a Microm HM 650V microslicer in ice-cold aCSF.Slices were trimmed and the CA3 region was removed.Slices were allowed to recover at 34°C for 3-6 hr in aCSF before use.Field excitatory postsynaptic potentials were recorded from interleaved slices from WT and ArcKR littermates.An individual slice was transferred to the recording chamber, submerged in aCSF, maintained at 32°C, and perfused at a rate of 6 mL/min.The slice was placed on a grid allowing perfusion above and below the tissue and all tubing was gas tight.To record fEPSPs, an aCSF filled microelectrode was placed on the surface of stratum radiatum in the CA1 region.A bipolar concentric stimulating electrode controlled by an isolated pulse stimulator, model 2100 was used to evoke fEPSPs at the Schaffer collateral–commissural pathway.All recordings were made in the presence of 50 μM picrotoxin to block GABAA receptors and the NMDA receptor antagonist L-689,560.Field EPSPs were evoked at 0.1 Hz, with a 20-min baseline recorded at a stimulus intensity that gave 40% of the maximal response.To induce mGluR-LTD, 50 or 100 μM of-3,5-DHPG was applied for 10 min and then washed off for at least one hour as previously described.Recordings of fEPSPs were made using a differential model 3000 amplifier with signals filtered at 3 kHz and digitized online with a Micro CED interface controlled by Spike software, Cambridge Electronic Design, Cambridge UK).Field EPSPs were analyzed using Spike software and graphs prepared using Origin, with the slope of fEPSPs measured for a 1 ms linear region following the fiber volley.Statistical analyses applied were the post hoc Student’s t test or repeated-measures ANOVA with pairwise multiple comparisons.Behavioral data were analyzed using two-way or repeated-measures Analysis of Variance.Fisher’s protected least-significant difference tests were used for comparing group means only when a 
significant F value was determined.For all comparisons, significance was set at p ≤ 0.05.Data presented in figures and tables are means.Arc knockin mice were produced by the ingenious targeting laboratory.Gene targeting was performed in iTL IC1 ES cells to introduce 2 point mutations within Exon 1 of the Arc gene.When encoded, the introduction of these point mutations leads to a substitution of Lysine to Arginine in positions 268 and 269, sites that were previously identified as being ubiquitinated by Triad3A and Ube3a.ES cells were screened and positive clones were microinjected into BALB/c blastocysts and transferred to pseudopregnant female mice.Resulting chimeras with a high percentage black coat color were mated to C57BL/6 FLP mice to remove the Neo cassette, and backcrossed five times to C57/BL6 mice.Arc+/+ were distinguished from ArcKR/KR homozygous mice by genotyping for the presence of the one remaining FRT site after Neo deletion using the following primer sets: ARC-KI-NDEL1: 5′-cttattggagtatgtgccatttctc-3′ and ARC-KI-NDEL2: 5′-cattgaccctgtctccagattc-3′ where the wild-type band size is 291 base pairs and the knockin band size is 355 base pairs.Primary hippocampal neurons of mixed sex were isolated from P0-1 mice as previously described.To assess Arc levels following mGluR-LTD, DIV12 - 14 primary hippocampal neurons were pre-treated with 2 μM TTX for 4 hr.TTX was washed out and 100 μm-3,5-dihydroxyphenylglycine was applied for a total of 10 min.Cells were harvested 45 min later following DHPG washout.To block protein synthesis of Arc, 20 μM anisomycin was added at the 45 min time point and cells were harvested 30, 60, and 120 min later.The primary cortical neuron culture protocol was based on.Cortices of mixed sex were dissected from E18 rat embryos.Cortices were dissociated in DNase and papain, then triturated with a fire-polished glass pipette to obtain a single-cell suspension.Cells were pelleted at 1000xg for 4 min, the supernatant removed, and cells resuspended and counted with a TC-20 cell counter.Neurons were plated on glass coverslips coated with poly-l-lysine in 12-well plates at 100,000 cells/mL.Neurons were initially plated in Neurobasal media containing 5% horse serum, 2% GlutaMAX, 2% B-27, and 1% penicillin/streptomycin in a 37°C incubator with 5% CO2.On DIV4, neurons were fed via half media exchange with Neurobasal media containing 1% horse serum, GlutaMAX, and penicillin/streptomycin, 2% B-27, and 5 μM cytosine β-d-arabinofuranoside.Neurons were fed every three days thereafter.At DIV14, transfections were performed using Lipofectamine 2000 as described previously.Immunostaining was performed 16 hr later.At DIV15, transfected neurons were live-labeled for surface GluA1 receptors.Neurons were washed twice at 10°C with 4% sucrose/1X phosphate-buffered saline, then incubated in anti-GluA1-NT diluted in MEM containing 2% GlutaMAX, 2% B-27, 15 mM HEPES, 1 mM sodium pyruvate, and 33 mM glucose at 10°C for 20 min.Neurons were then fixed for 15 min with 4% sucrose/4% paraformaldehyde in 1X PBS, then incubated in Alexa Fluor 555 to label only surface GluA1.Following this, neurons were permeabilized for 10 min with 0.2% Triton X-100 in 1X PBS, and blocked for 30 min in 5% normal donkey serum in 1X PBS.Neurons were then incubated with rabbit anti-Arc antibody, diluted in block for 1 hr at RT, washed 3 × 5 min in 1X PBS, and incubated in secondary antibody diluted in block for 1 hr at RT.Neurons on coverslips were mounted on glass slides in Fluoromount and dried overnight at 
RT.A total of 15 transfected and untransfected neurons were imaged at 60X on an Olympus FV1000 confocal microscope.GluA1 immunostaining was analyzed using ImageJ software.The most intense immunostaining was used to set an arbitrary pixel intensity threshold, which was applied to every image in the experiment.Integrated density of each puncta in two 25-μm dendrite segments/neuron was measured and summed to obtain a total integrated density of the puncta on the dendritic segment.Primary hippocampal neurons were prepared as stated above and plated in 96-well microplates at a density of 20,000 cells per well.Neurons were fed as previously described.At DIV 14, neurons were treated with 2 μM TTX for 4 hr.Following treatment, neurons were cooled to room temperature and incubated with a 1:150 dilution of anti-GluA1 or anti-GluA2 antibody prepared in conditioned media.Neurons were incubated for 20 min at room temperature to allow antibody binding.Samples were washed 3 times with room temperature Neurobasal medium and then treated with vehicle or 100 μm DHPG for 10 min.After 10 min, DHPG was washed out and neurons were fixed with 4% paraformaldehyde/4% sucrose in PBS for 20 min at 4°C.Neurons were then washed with 1X DPBS and blocked in Odyssey Blocking Buffer for 90 min at room temperature.To measure surface receptors, neurons were incubated for 1 hr in a 1:1,500 dilution of IRDye 680RD Goat anti-Mouse IgG secondary antibody.Neurons were then washed with TBS 5 times and then fixed with 4% paraformaldehyde/4% sucrose in PBS for 20 min at 4°C.Neurons were then washed with TBS 2 more times and permeabilized in TBS containing 0.2% saponin for 15 min at room temperature.Neurons were then blocked in Odyssey Blocking Buffer for 90 min at room temperature.To label the internalized pool of receptors, neurons were incubated for 1 hr in a 1:1,500 dilution of IRDye 800CW Donkey anti-Mouse IgG secondary antibody.Cells were then washed with TBS 5 times and imaged on the Odyssey Clx Imaging System with a resolution of 84 μm, medium quality and a 3 mm focus offset.Images were processed in FIJI where ROIs were drawn on each well.The integrated density was measured on ROIs superimposed on the 680 and 800 channels individually.Experiments were run in triplicate and integrated density values for each channel in individual experimental wells were subtracted to a secondary only control.To calculate changes in surface receptor expression, the following calculation was used: RS/RT, where RS represents the integrated density of surface receptors and RT represents the integrated density of surface receptors + integrated density of internal receptors.Synaptosomes were prepared as previously described.Briefly, hippocampi were dissected from male and female P21 WT and ArcKR mice.Hippocampi were lysed in 10 volumes of HEPES-buffered sucrose and homogenized using a motor driven glass-teflon homogenizer at ∼900 rpm.The homogenate was centrifuged at 800-1000 x g at 4°C to remove the pelleted nuclear fraction.The resultant supernatant was spun at 10,000 x g for 15 min to yield the crude synaptosomal pellet.The pellet was resuspended in 10 volumes of HEPES-buffered sucrose and then respun at 10,000 x g for another 15 min to yield the crude synaptosomal fraction.The resulting pellet was lysed by hypoosmotic shock in 9 volumes ice cold H20 plus protease/phosphatase inhibitors and three strokes of a glass-teflon homogenizer and rapidly adjusted to 4 mM HEPES using 1 M HEPES, pH 7.4 stock solution.Samples were mixed constantly at 4°C 
for 30 min to ensure complete lysis.The lysate was centrifuged at 25,000 x g for 20 min to yield a supernatant and a pellet.The pellet was resuspended in HEPES-buffered sucrose and used for western analysis.Hippocampal lysate obtained from WT and ArcKR mice was prepared as previously described.Western blots were performed as previously described.Membranes were probed with the following antibodies: rabbit anti-Arc, goat anti-NR1, goat anti-NR2B, mouse anti-GluA1, mouse anti-GluA2, rabbit anti-mGluR1A, rabbit anti-mGluR5, rabbit anti-PSD-95, mouse anti-β-Actin, mouse anti-GAPDH.The following secondary antibodies were used: IRDye 680RD Goat anti-Mouse IgG, IRDye 800CW Goat anti-Rabbit IgG, IRDye 800CW Donkey anti-Goat IgG, Goat anti-Rabbit IgG-HRP H+L and Goat anti-Mouse IgG HRP LC.Blots were imaged using the Odyssey Clx Imaging System or the ChemiDoc MP Imaging System.Pilocarpine seizures were induced in postnatal day 60–70 male and female WT and ArcKR mice as previously described.Hippocampi from mice were harvested 30 min after the presence of Class III seizure onset.Arc protein ubiquitination was measured as previously described.Hippocampi from trained and naive WT and ArcKR mice were collected, submerged in RNAlater, and stored at −20°C until processed.The tissue was transferred into TRIzol reagent, disrupted using sterile pestles and homogenized by passage through a QIAshredder column.The homogenization was followed by chloroform phase separation and purification of the total RNA using the RNeasy Lipid Tissue Mini Kit.Purified RNA was subjected to on-column DNase treatment and the concentration and purity of the RNA was assessed spectrophotometrically using the NanoDrop ND-1000.RNA used had an A260/A280 ratio of 1.9–2.25.First-strand cDNA synthesis was performed using the Transcriptor First Strand cDNA Synthesis Kit using an anchored oligo18 primer, according to the manufacturer’s protocol.Primers were designed with the help of Primer3Plus software.qPCR was performed using a StepOnePlus Real-Time PCR System.Each reaction comprised of 2 μL of diluted cDNA, 5 μL PowerUp SYBR Green Master Mix and 500 nM primers in a final volume of 10 μL.The PCR cycling conditions were as follows: 50°C for 2 min, 95°C for 2 min, then 40 cycles of 95°C for 15 s and 60°C for 1 min.Cycling was followed by melt curve recording between 60°C and 95°C.Primer standard curves were performed to estimate the PCR efficiencies for each primer pair.Cycle threshold values were determined by the StepOne Plus software and adjusted manually.All qPCR reactions were run in duplicate or triplicate.A mean Ct value was calculated for each primer pair and each experimental condition.Relative quantification of Arc and Grm5 mRNA was performed using the 2-ΔΔCt method.Data were normalized to the geometric mean of GAPDH and/or GPI and presented as expression relative to a standard condition as indicated in the figure legends.Primer sequences are as follows: Arc-L tgttgaccgaagtgtccaag; Arc-R aagttgttctccagcttgcc; mGluR5-L cagtccgtgagcagtatgg; mGluR5-R gcccaatgactcccacta; GAPDH-L ggcaaattcaacggcacagt; GAPDH-R gggtctcgctcctggaagat; mGPI-L agctgcgcgaactttttgag; mGPI-R tatgcccatggttggtgttg.For behavior experiments, 12 WT and 12 ArcKR mice were used.Mice were sex balanced and housed separately by sex in groups of 4.No deaths occurred throughout the course of all behavioral studies.All behavioral tests were performed with the experimenter blinded to genotype.8-week-old mice were tested for motor coordination and learning on an accelerating 
rotarod.For the first test session, mice were given three trials, with 45 s between each trial.Two additional trials were given 48 hr later.Rpm was set at an initial value of 3, with a progressive increase to a maximum of 30 rpm across 5 min.The latency to fall from the top of the rotating barrel was recorded.Exploratory activity in a novel environment was assessed in 8-week-old mice by a one-hour trial in an open field chamber crossed by a grid of photobeams.Counts were taken of the number of photobeams broken during the trial in five-min intervals, with separate measures for locomotion and rearing movements.Time spent in the center region of the open field was measured as an index of anxiety-like behavior.The elevated plus maze was used to assess anxiety–like behavior, based on a natural tendency of mice to actively explore a new environment, versus a fear of being in an open area.Mice were given one five-min trial on the plus maze, which had two walled arms and two open arms.The maze was elevated 50 cm from the floor, and the arms were 30 cm long.Mice were placed on the center section and allowed to freely explore the maze.Measures were taken of time spent in, and number of entries into, the open and closed arms of the maze.To measure anxiety-like behaviors, 11-week-old mice were placed in a Plexiglas cage located in a sound-attenuating chamber with ceiling light and fan.The cage contained 5 cm of corncob bedding, with 20 black glass marbles arranged in an equidistant 5 X 4 grid on top of the bedding.Subjects were given access to the marbles for 30 min.The number of marbles buried was measured.Mice were habituated in a Plexiglas cage containing 2 cm of corncob bedding for 30 min.24 hr later, two of the same objects were placed in the same habituated cage containing 2 cm of corncob bedding.Mice were allowed to explore both objects for a total of 30 min.24 hr later, one of the objects was replaced with a novel object and mice were allowed to explore both the familiar and novel object for a total of 30 min.Measurements of time spent with each object were scored during min 2 through 12 of the acquisition and trial phase.The Novel Object Recognition Index was calculated as the).Object interactions were defined as active interaction with the object where the mouse’s nose was at least 1 cm pointed toward the object, time spent interacting/touching, and active sniffing of the object.Rearing on the objects was not scored.Exclusion criteria was set for <30 s interaction with both objects during the acquisition phase.One WT and two ArcKR mice did not meet the criteria and were excluded from the analysis.WT and ArcKR male mice aged 21-25 days were tested.Out of these 5 WT and 6 ArcKR mice were trained for 1 day, 5 WT and 7 ArcKR mice were trained for 15 days and 19 WT and 14 ArcKR were trained for 20-21 days.Spatial learning was assessed using a modified circular Barnes maze that measured 1 m in diameter, was situated 1 m from the floor, and contained 20 5-cm holes that were evenly spaced around the perimeter.The maze was positioned centrally within the lab, with surrounding equipment and architectural features kept in fixed positions, to act as spatial cues for learning.The maze contained an ‘exit’ box positioned under one of the holes and a “fake” box under another.The exit hole was randomly assigned on the first day and maintained in this position for 15 days for the “exit” box prior to the 180° shift and for the remaining 5-6 days of training.Each day, mice were randomly placed in the center of 
the maze, released, and allowed to explore the maze. The task was completed when the mouse entered the exit box. All runs were recorded using a camera system attached to a computer for offline analysis. Total distance, speed, and accuracy of task performance were measured. On days 1–5, the exit box contained flavored treats as a reward for task completion. On days 6–21, treats were awarded in the home cage on completion, to prevent cued orientation towards the exit box location via olfactory stimulation. On days 16–21, the position of the exit box was rotated 180° to determine the spatial component of, and coping ability for, the task. Data analysis was carried out in a blind fashion, independently of the experimenter. Error number was measured by counting the number of incorrect holes visited before locating the correct "exit" hole. Mapping the progression of the animals around the maze allowed determination of the search strategy. These were: random: no consistent pattern, with >2 crossings of the open field; serial: a hole-by-hole progression with ≥3 consecutive holes visited; and spatial: moving directly to the exit hole (±2 holes) with no deviation outside of the quadrant (a rule-based sketch of this classification is given below). Hippocampal neuronal cultures were prepared from postnatal day 0 pups from C57BL/6 wild-type mice as previously described. Briefly, hippocampi were dissected from the brain, dissociated with trypsin, and approximately 10^5 cells were plated onto 22-mm glass coverslips coated with poly-L-lysine in Neurobasal medium containing 1% L-glutamine, 1% penicillin-streptomycin, 2% B27 supplement and 5% horse serum. 24 hr after plating, the medium was completely changed and cells were grown in serum-free medium. Cultures were maintained at 37°C and 5% CO2 in a humidified incubator, and transfections of either scrambled or Triad3-shRNA constructs were performed using Lipofectamine 2000. Cells expressing scrambled or Triad3-shRNA constructs were recorded at least 3–5 days after transfection. Coverslips were transferred to the recording chamber and perfused at a constant flow rate with recording solution composed of (in mM): 127 NaCl, 1.9 KCl, 1 MgCl2, 2 CaCl2, 1.3 KH2PO4, 26 NaHCO3, 10 D-glucose, pH 7.4, at 28–30°C. Tetrodotoxin, picrotoxin, and L-689,560 were present in the recording solutions to isolate mEPSCs. To induce mGluR-dependent synaptic depression, 3,5-dihydroxyphenylglycine (DHPG) was bath applied for 10 min. Neurons were visualized using IR-DIC optics with an Olympus BX51W1 microscope and a Hitachi CCD camera at a total magnification of 400×. Whole-cell patch clamp recordings were made from transfected pyramidal neurons using patch pipettes made from thick-walled borosilicate glass filled with (in mM): 135 potassium gluconate, 7 NaCl, 10 HEPES, 0.5 EGTA, 10 phosphocreatine, 2 MgATP, 0.3 NaGTP, pH 7.2, 290 mOsm. Recordings of mEPSCs were obtained at a holding potential of −75 mV using an Axon Multiclamp 700B amplifier, filtered at 3 kHz and digitized at 20 kHz. Data acquisition was performed using pClamp 10. Analysis of mEPSCs was performed using MiniAnalysis software. Events were analyzed manually and were accepted if they had an amplitude >5 pA and a faster rise than decay. Statistical significance was assessed using a one-way ANOVA, with p < 0.05 taken as significant.
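The Barnes maze search-strategy definitions above (random, serial, spatial) lend themselves to a simple rule-based classifier. The sketch below is a minimal illustration of how an ordered sequence of visited hole indices could be scored against those criteria; the function names, the input format (hole numbers 0–19 plus a count of open-field crossings), the decision order, and the exact operationalisation of "no deviation outside of the quadrant" are assumptions made for illustration and are not part of the published analysis pipeline.

```python
# Minimal, illustrative classifier for Barnes maze search strategies,
# following the verbal definitions in the text. Input format and decision
# order are assumptions of this sketch.

N_HOLES = 20

def circular_distance(a: int, b: int, n: int = N_HOLES) -> int:
    """Shortest hole-to-hole distance on the circular maze."""
    d = abs(a - b) % n
    return min(d, n - d)

def classify_strategy(visits: list, exit_hole: int, crossings: int) -> str:
    """Classify one trial as 'spatial', 'serial', 'random' or 'mixed'."""
    if not visits:
        return "mixed"

    # Spatial: heads (almost) directly to the exit hole, staying within
    # +/- 2 holes of it, i.e. never leaving the exit quadrant.
    if all(circular_distance(h, exit_hole) <= 2 for h in visits):
        return "spatial"

    # Serial: hole-by-hole progression with >= 3 consecutive adjacent holes.
    run = 1
    for prev, curr in zip(visits, visits[1:]):
        run = run + 1 if circular_distance(prev, curr) == 1 else 1
        if run >= 3:
            return "serial"

    # Random: no consistent pattern and more than two open-field crossings.
    if crossings > 2:
        return "random"

    return "mixed"

# Example trial: a serial sweep ending at exit hole 7, one centre crossing.
print(classify_strategy([2, 3, 4, 5, 6, 7], exit_hole=7, crossings=1))  # 'serial'
```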
Neuronal activity regulates the transcription and translation of the immediate-early gene Arc/Arg3.1, a key mediator of synaptic plasticity. Proteasome-dependent degradation of Arc tightly limits its temporal expression, yet the significance of this regulation remains unknown. We disrupted the temporal control of Arc degradation by creating an Arc knockin mouse (ArcKR) where the predominant Arc ubiquitination sites were mutated. ArcKR mice had intact spatial learning but showed specific deficits in selecting an optimal strategy during reversal learning. This cognitive inflexibility was coupled to changes in Arc mRNA and protein expression resulting in a reduced threshold to induce mGluR-LTD and enhanced mGluR-LTD amplitude. These findings show that the abnormal persistence of Arc protein limits the dynamic range of Arc signaling pathways specifically during reversal learning. Our work illuminates how the precise temporal control of activity-dependent molecules, such as Arc, regulates synaptic plasticity and is crucial for cognition. Ubiquitin-dependent proteasomal degradation of Arc tightly limits its temporal expression, yet the significance of this regulation remains unknown. We generated a mouse disrupting the temporal control of Arc removal (ArcKR). ArcKR mice exhibit enhanced mGluR-LTD and have impaired cognitive flexibility.
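The qPCR analysis described in the methods above quantifies Arc and Grm5 transcripts with the 2^-ΔΔCt method, normalised to the geometric mean of the reference genes (GAPDH and/or GPI). A minimal sketch of that calculation is given below; the function and variable names are chosen for illustration, and the Ct values in the example call are entirely hypothetical.

```python
def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """Fold change of a target gene by the 2^-ddCt method.

    Normalisation uses the geometric mean of the reference genes (GAPDH
    and/or GPI in the text); because expression scales as 2^-Ct, this is
    equivalent to subtracting the arithmetic mean of the reference Ct values.
    """
    d_ct_sample = ct_target - sum(ct_refs) / len(ct_refs)
    d_ct_calibrator = ct_target_cal - sum(ct_refs_cal) / len(ct_refs_cal)
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Entirely hypothetical mean Ct values (trained sample vs. day-1 calibrator):
fold_change = relative_expression(ct_target=24.1, ct_refs=[18.2, 19.0],
                                  ct_target_cal=25.0, ct_refs_cal=[18.3, 19.1])
print(f"target expression relative to calibrator: {fold_change:.2f}")
```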
434
Towards acoustic metafoams: The enhanced performance of a poroelastic material with local resonators
Protection against noise pollution is nowadays a compelling problem due to progressing industrialisation and omnipresent sources of disturbing and harmful sounds. In particular, low frequency noise attenuation still awaits an efficient solution, preferably based on a material suitable for mass production. In terms of sound absorption, porous materials like acoustic foams, fiber glass or mineral wool are usually quite efficient due to their large microstructural air-solid interface area, which results in high viscothermal dissipation. However, the efficiency of these materials for low frequencies is significantly lower than for mid and high frequency sounds. Moreover, studies investigating the relationship between the microstructure of conventional acoustic foams and their performance show that the shift of the absorption peak towards lower frequencies by means of a detailed pore design or by increasing the reticulation rate is rather limited. An alternative to improvements at the microstructural level is to modify the materials at a scale larger than the pore size. It has been observed that meso-perforations introduced into a porous material may be beneficial for its acoustic performance at low frequencies (see the literature on double porosity media and the references therein). Embedding other types of meso-scale inclusions in foam-like materials has been explored for the same purpose. A distribution of Helmholtz resonators in the bulk of a poroelastic material has been introduced in Lagarrigue et al. and Doutres et al., and was recently optimised in Park et al. Note that, in order to achieve low frequency performance of such panels, the sizes of the resonating cavities need to be rather large. Moreover, inspired by the concept of phononic crystals, a periodic arrangement of metal rods has been proposed for example in Weisser et al.
and a double porosity material with an array of mass inclusions has been reported in Cui and Harne.Although such solutions, based on meso-scale modifications of porous materials, can offer an improvement of the attenuation level at low frequencies, they also require significantly different designing and manufacturing steps.Acoustic metamaterials are another class of materials that are promising in the context of low frequency sound attenuation.The dedicated microstructural design of these materials results in higher attenuation level relative to what is achieved with traditional materials.A number of metamaterials achieve their extraordinary properties through a local resonance phenomenon.The prime manufactured example of a solid material generating so called band gaps and acting as a total wave reflector due to resonating inclusions, which effectively prohibits the propagation of waves at certain frequencies, has been reported rather recently in Liu et al.Boutin and Becot have proposed a material entirely composed of Helmholtz resonators, also achieving an improvement in sound attenuation.This concept was further developed in Griffiths et al., proposing a porogranular material design with resonators made of soft elastomer shells.Recently, the presence of shear wave band gaps and left-handed behaviour of a cellular material entrained with water have been demonstrated experimentally in Dorodnitsyn and Van Damme, where the dispersive properties of the material are achieved through resonance of the lattice walls.A special type of acoustic metamaterials are the structures incorporating membrane resonance.Such materials are typically designed to improve absorption performance as is the case for a “dark” acoustic metamaterial or in the work of Ma et al. where hybrid resonance of a decorated membrane is exploited.Yang et al. have proposed a combination of a membrane-type and a Helmholtz resonator, showing that such degenerate resonators, when properly coupled, can also serve as an effective absorber.Recently, atypical acoustic behaviour of rigid permeable materials with thin elastic membranes embedded in a rigid microstructure has been demonstrated numerically and experimentally in Venegas and Boutin.In many studies concerning metamaterials, especially those involving fluid structure interaction, material losses are either not included in the model or incorporated in a simplified way through a phenomenological parameter.For some materials, incorporating the influence of realistic damping occurring through thermal and viscous dissipation is not just a refinement of the model but may be crucial for the correct predictions of their response.For instance, in the study by Molerón et al., focusing on a metamaterial consisting of rigid slabs embedded in air, the inclusion of fluid losses changed the predicted acoustic behaviour from perfect transmission to perfect reflection.Furthermore, Henríquez et al. 
have reported that viscothermal dissipation in the air, even for geometries much larger than the boundary layer thicknesses, may completely suppress the double negative behaviour of the analysed rigid periodic structure.In this paper, the acoustic performance of a poroelastic material enriched with resonators embedded in the pores is investigated.Unlike previous studies aiming at optimising the pore geometry and morphology for low frequency performance, here, a major change in terms of micro-dynamical phenomena is introduced.The micro-resonator is represented by a cantilever beam with a heavy mass attached at its tip, which represents a particle embedded in the pore during the manufacturing process.Bloch analysis performed for the proposed unit cell design predicts a significant attenuation in the low frequency range for both fast and slow compressional waves propagating through the system.This behaviour is also confirmed by a transmission analysis of a finite size set-up by direct simulations.Moreover, it is shown that the shear viscosity of the fluid is crucial for revealing the resonance-related attenuation mechanisms.In order to numerically demonstrate the concept of embedding local resonators within the pores of poroelastic material, a simple cubic unit cell is used.Supporting the recent progress in innovative approaches to manufacturing poroelastic/cellular materials, this work contributes towards the development of a new type of foams: acoustic metafoams.The paper is organised as follows.In Section 2, the unit cell and the modelling approach are described.In Section 3, the simulation results are shown, including the analysis of the dispersion diagrams and transmission studies of the response of a single unit cell as well as a finite size configuration.In Section 4 the main results are further discussed, after which conclusions are presented.The microstructure of the unit cell consists of fluid and solid phases which are described in the frequency domain.Conventional descriptions for each component are adopted from Gao et al. and Yamamoto et al.The coupling between the domains is prescribed in the following way.At the interface between the viscothermal fluid and the solid, continuity of velocities and tractions is assumed, along with an isothermal condition.Continuity of tractions and normal acceleration is adopted at the interfaces between the solid and the inviscid isothermal fluid as well as between the viscothermal and inviscid isothermal fluid.At the latter interface, also adiabatic conditions for the temperature are assumed.The material parameters adopted for the fluid and solid domains are presented in Tables 1 and 2, respectively.In this section, the numerical results are presented.First, the behaviour of an infinite periodic arrangement of unit cells is assessed based on the Bloch analysis.Next, the acoustic performance of a finite material sample is studied using the transmission set-up.Fig. 3 shows the dispersion diagrams obtained for the three considered 3D unit cell geometries, of Fig. 1, using an inviscid isothermal fluid.The wave polarisations are identified by the ratio between the amplitudes of the displacement in the solid along the x axis and the total displacement in the solid, both integrated over the solid domain.The compressional and shear wave polarisations are then distinguished by colours varying from red to blue, respectively.As can be seen in Fig. 
3, two shear and two compressional waves propagate in the material consisting of 3D unit cells with the solid and fluid phases, where two shear waves relate to the two perpendicular displacement directions, and two compressional waves correspond to the in-phase and out-of-phase motion of the fluid and solid, respectively.Dispersion curves for the inviscid cases without the resonator and with the light resonator presented in Fig. 3a overlap in the considered frequency range and do not exhibit any dispersive or dissipative effects.Yet, significant dispersion can be observed for the case with the heavy resonator in Fig. 3b.A band gap is formed in the frequency range 445–560 Hz for two wave polarisation: the slow compressional wave and one of the shear waves, as evident from the absence of the dispersion curves in the real plane and high imaginary values of the wavenumber.It should be emphasised that due to the presence of the two other wave types inside the band gap region, a complete band gap is not formed.Note, that due to the small dimensions of the unit cell, the dispersive effects related to the cavity resonance for the wave type L1 occur at much higher frequencies, exceeding the considered range.Fig. 4 presents the 3D band structure and its 2D projections obtained for the geometry with the heavy resonator when a viscous fluid is considered.As a reference, the lossless case discussed previously is also shown in the graph.The viscosity of the fluid has a significant influence on the dispersion curves.In particular, both compressional waves are attenuated in the considered frequency range, with attenuation peaks located around 440 Hz.The 3D view of the band structure reveals how both dispersion curves associated with these waves bend towards the complex wavenumber domain, exhibiting a spin in the band gap region, characteristic for dissipative systems.The shear wave S2 now shows a slight broadening of the attenuation regime in the viscous case.No influence of the air viscosity can be observed for the other shear wave polarisation.In order to assess the effect of the resonance on the attenuation performance of the unit cell with the heavy resonator entrained with viscous fluid, the corresponding dispersion curves obtained are compared with those calculated for the unit cell with a light resonator.Fig. 5 shows that for low frequencies, the fast compressional wave reveals a higher level of attenuation in the case with the heavy resonator.On the other hand, for frequencies above 450 Hz the attenuation factor is higher for the light resonator.An attenuation peak located around frequency 440 Hz, which is observed for the slow wave with the heavy resonator, is not present in the case of the light resonator, for which the attenuation is slightly lower at the higher frequencies as well.The attenuation of the compressional waves shown in the dispersion diagrams can be associated with the dynamic behaviour of the unit cell.Therefore, in Fig. 
6 the velocity fields in the longitudinal direction at the mid-plane cross section for different points, are presented for both unit cells.The mode shape for the fast wave, for which the fluid motion is in-phase with the solid, shows high velocity gradients occurring around the membrane opening, if the light resonator is considered.For the heavy resonator unit cell, the resonating cantilever significantly enhances the velocity gradient field, in particular, through its out-of-phase oscillations.Analogous effects can be observed for the slow wave.The presence of the heavy resonator contributes to the higher attenuation level by, enhancing both dissipative effects and increasing reflection, as will be demonstrated in the following sections.In Figs. 7 and 8, the band structures obtained for the unit cell with heavy resonators and viscous fluid with different viscosities are presented.For clarity, only the compressional waves are shown and the distinction between fast and slow waves is introduced using colours, where red and green colours denote fluid-born and solid-born waves, respectively.Intermediate colours reflect the ratio between the fluid and solid velocities in the longitudinal direction integrated over the front face of the unit cell and averaged over the solid and fluid parts of this surface, allowing to identify the in-phase and nearly out-of-phase motion between fluid and solid.The dispersion analysis allows to assess the wave propagation and attenuation in the infinite material domain.In order to analyse the behaviour of finite structures and distinguish between the mechanisms underlying the wave attenuation, a transmission calculation is conducted using the numerical set-up detailed in Section 2.4.In this section, the performance of a single unit cell as a material sample is investigated.Fig. 11 shows the acoustic properties of a single unit cell with viscothermal losses for the unit cells with and without resonators.The heavy resonator cell reveals, a transmission dip at 440 Hz, which is not visible for the case with a light or no resonator.Based on the reflection and absorption curves, it can be stated that at this frequency part of the energy is dissipated within the fluid and a similar part is reflected.In Fig. 12, the velocity fields at the mid-plane cross-section of the three unit cells are depicted.The unit cells without the resonator and with the light resonator behave quite similar.A minor increase of the velocities is present around the membrane opening, and additionally around the resonator tip for the corresponding unit cell.The presence of the heavy resonator significantly changes the response of the unit cell.At a frequency of 440 Hz, the elastic cantilever resonates and high velocity gradients can be observed within the fluid.Note, that this velocity field qualitatively resembles the one obtained through the dispersion analysis, suggesting that the dominant role for the transmission analysis in the considered set-up is played by the fast compressional wave.As a consequence, a high level of viscous dissipation is obtained, as well as an increase in the acoustic impedance, resulting in a reflection peak, see Fig. 11b.In Fig. 
13, the acoustic properties obtained for the unit cell with a heavy resonator and viscothermal fluid are compared with those obtained for the inviscid isothermal fluid).A clear difference between both unit cells emerges.The transmission dip observed with the viscothermal fluid is not present in the analysis without viscothermal losses.Only an isolated reflection peak is found at 550 Hz, which can be associated with the global eigenfrequency of the finite system.Note, that the reflection dips are present in both cases at the frequency 560 Hz, which is the frequency closing the band gap in Fig. 3c. Naturally, no absorption is observed if the losses are not considered.The small opening ratio of the membrane in this study has been used in order to induce the viscothermal dissipation at low frequencies.In Fig. 16, transmission, reflection and absorption for different opening ratios are shown, from fully open to closed unit cells.It is clear that for the high attenuation to occur, a sufficiently small membrane opening is required, since the increase of the membrane opening size reduces both reflection and absorption levels.On the other hand, the presence of small membrane openings results in broadening of the absorption zone also beyond the regime of local resonance.In this section, results obtained for the transmission set-up with multiple unit cells as a material sample are presented.In Fig. 17, transmission, reflection and absorption are shown for a row of ten unit cells with heavy resonators and a viscothermal fluid with the viscosity of air.The performance of the unit cells with light resonators is also depicted.In analogy to the single unit cell study, a transmission dip around 440 Hz can be observed, which is not present in the case without added mass.Moreover, in this case, the attenuation range spans a broader frequency range.Note, that the behaviour of the transmission spectra for both light and heavy resonators can be directly related to the attenuation factor for the fast compressional wave shown in Fig. 5.According to Fig. 17b, the main mechanism underlying the reduction in transmission is reflection, since a high reflection peak exceeding 0.8 emerges around 440 Hz.The reflection peak is followed by a reflection dip, which explains the increase of transmission observed around 560 Hz.Such a behaviour is typical for locally resonant materials, where at the end of the band gap region there is again a high transmissibility; Liu et al.).The absorption performance of the multiple cell set-up is rather moderate.Moreover, the absorption peak is shifted to higher frequencies in comparison with the single cell behaviour, which is a result of the strong reflection at the resonance frequency.Among the different dissipative mechanisms occurring in the unit cells, the major role is played by viscous losses, as can be seen in Fig. 18, where the lines denoting total absorption and viscous dissipation practically overlap.Indeed, as stated in Gao et al., for small unit cell sizes, thermal dissipation is highly reduced.This justifies why this contribution was neglected in the Bloch analysis.In Fig. 
19, the acoustic performance of set-ups of different sizes is shown.The increase in the number of unit cells results in an increase of the wave attenuation in the frequency range around 440 Hz.An increase of the reflection can be observed which is accompanied by a decrease and a shift of a broadened absorption peak.The results of this analysis show that the proposed microstructural configuration, in spite of its simplicity clearly indicates a potential way for improving the acoustic attenuation of foams by combining the mechanisms of viscothermal dissipation with local resonance.This combination constitutes a pathway towards the design of acoustic metafoams.In this study, based on transmission, reflection and absorption spectra, as well as complex dispersion diagrams, the behaviour of foam-like unit cells with and without resonators has been assessed, showing that the presence of resonating masses locally improves the noise isolation properties of the material.The enhancement of the sound attenuation is based on the increase of both reflection and absorption properties due to the specific design of this coupled solid-fluid system.Moreover, by increasing the number of cells in the material, the reflection mechanism becomes the dominating one.To obtain the desired transmission dip, it is necessary to properly incorporate the losses in the fluid, i.e. a realistic viscothermal description of air needs to be adopted.A complex description of the fluid domain is fully justified since the characteristic lengths of the pores are comparable with the thicknesses of the viscous and thermal boundary layers.Coupling between the fluid and solid domains plays an important role in inducing the resonance and amplifying viscothermal dissipation.Although the idea of designing a dynamic absorber using tuned resonators introduced by Den Hartog, is well known in the literature, the concept proposed here goes clearly beyond that and relies on the more complex interaction between solid and fluid including viscous effects resulting from the non-slip interface condition.Particularly, due to the presence of the viscous stresses, the solid and fluid domains are strongly coupled and the local resonance has a visible effect on the performance of the material.As demonstrated through the dispersion analysis, the combination of the local resonance of the solid and fluid viscosity allows for the attenuation of the fast compressional wave, which is key for the development of the next generation acoustic porous materials.This study has also identified the main parameters governing the attenuation performance of acoustic metafoams, namely the characteristic size of the pores, the membrane opening ratio and the viscosity of the fluid.The characteristic pore size determines the frequencies at which absorption can be expected.The opening ratio influences the attenuation performance.In particular, small membrane openings broaden the absorption peak, but they should be sufficiently small for obtaining high attenuation levels.Finally, the fluid viscosity is essential for enhancing attenuation by ensuring a strong coupling between solid and fluid phases.Depending on other material and geometrical parameters, an optimal level of viscosity exists for which dissipation in the fluid is enhanced by sufficiently high amplitudes of the local resonator).In other words, a viscosity value exists for which the attenuation factor for the fast compressional wave is the highest.Therefore, an optimal design of the geometry could be pursued, by 
controlling the characteristic dimensions of the unit cell and the viscosity of the fluid such that the attenuation effect is maximised. Intrinsic parameters of the resonators can also be varied in order to tune the operating frequency range, as done for purely solid metamaterials, e.g. in Krushynska et al. Moreover, material damping of the solid should also be taken into account, since it directly influences the efficiency of the local resonator, as demonstrated e.g. in Krushynska et al. In terms of applications, the microstructural design proposed in this paper can be used to improve the performance of porous materials by distributing resonating particles inside their pores. This work might open new paths for designing porous materials, especially considering recent advancements in the control of foam manufacturing processes and the developments in 3D printing techniques for cellular materials. Limited by the computational resources required for direct simulations of a multiphase lossy material with fine geometrical features, the performance of up to fifteen identical unit cells in a row has been analysed, whereas the typical thickness of foam panels is of the order of several centimetres; using high-performance computing tools (e.g. van Tuijl et al.), these larger thicknesses could be reached. On the other hand, based on the studies of band structures, the behaviour of an infinite periodic material has been assessed, leading to consistent conclusions. For the proposed unit cell geometry, the observed transmission dip is followed by a transmission peak, which is typical for locally resonant acoustic metamaterials and consistent with the attenuation spectra obtained from the Bloch analysis. This implies that the proposed idealised structure is specifically effective for wave attenuation in a selected low frequency range. A random microstructure with non-uniform resonators, however, may have the potential to provide even broader frequency attenuation ranges. This would require further analyses, which will be the subject of future investigations. This paper presented an acoustic metafoam concept, based on a poroelastic unit cell with an embedded resonating mass. The acoustic attenuation at low frequencies has been improved by combining the effects of viscothermal dissipation in the fluid with local resonance of the solid. In contrast to many previous studies, the resonators are introduced within the pore, in the form of a resonating particle at the tip of an elastic micro-cantilever. Analysis of complex dispersion diagrams and numerical transmission spectra showed that the proposed unit cell enriched with a resonator performs significantly better at low frequencies than its light or non-resonating equivalent. The resulting transmission dip is pronounced even for a single unit cell. However, due to the resonant origin of the attenuation, the affected frequency range remains limited. It has been shown that the enhanced attenuation only emerges if viscothermal losses in the fluid are included. This underlines the role of the fluid-solid coupling, through which not only is the local resonance induced but the viscous dissipation is also increased. The complex fluid description is another feature distinguishing this work among other metamaterials involving acoustic-structure design, where a phenomenological description of losses is usually sufficient. This work contributes to a novel design towards acoustic metafoams and to the development of new porous materials with improved performance at low frequencies.
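The transmission study above reports transmission, reflection and absorption spectra for finite stacks of unit cells. A common way to obtain the absorption coefficient in such a set-up is from the energy balance of the incident wave; the sketch below assumes that power (energy) reflection and transmission coefficients are already available from a simulation or measurement, which is a generic post-processing convention rather than a description of the authors' code, and the numerical values are hypothetical.

```python
import numpy as np

def absorption_from_energy_balance(reflection, transmission):
    """Absorption coefficient alpha(f) = 1 - R(f) - T(f).

    `reflection` and `transmission` are power coefficients in [0, 1],
    e.g. |r|^2 and |t|^2 of the complex pressure amplitude coefficients.
    """
    alpha = 1.0 - np.asarray(reflection) - np.asarray(transmission)
    return np.clip(alpha, 0.0, 1.0)  # guard against small numerical overshoot

# Hypothetical spectra around the resonance of the heavy resonator (~440 Hz):
freq = np.array([300.0, 440.0, 560.0])   # Hz
R = np.array([0.10, 0.80, 0.05])         # power reflection
T = np.array([0.85, 0.05, 0.90])         # power transmission
print(dict(zip(freq, absorption_from_energy_balance(R, T))))
```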
Acoustic foams are commonly used for sound attenuation purposes. Due to their porous microstructure, they efficiently dissipate energy through the air flowing in and out of the pores at high frequencies. However, the low frequency performance is still challenging for foams, even after optimisation of their microstructural design. A new, innovative, approach is therefore needed to further improve the acoustic behaviour of poroelastic materials. The expanding field of locally resonant acoustic metamaterials shows some promising examples where resonating masses incorporated within the microstructure lead to a significant enhancement of low frequency wave attenuation. In this paper, a combination of traditional poroelastic materials with locally resonant units embedded inside the pores is proposed, showing the pathway towards designing acoustic metafoams: poroelastic materials with properties beyond standard foams. The conceptual microstructural design of an idealised unit cell presented in this work consists of a cubic pore representing a foam unit cell with an embedded micro-resonator and filled with a viscothermal fluid (air). Analysis of complex dispersion diagrams and numerical transmission simulations demonstrate a clear improvement in wave attenuation achieved by such a microstructure. It is believed that this demonstrates the concept, which serves the future development of novel poroelastic materials.
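The low frequency band gap exploited in this work arises from local resonance. As a conceptual illustration only, not the paper's finite-element model of the viscothermal fluid-solid unit cell, the classic one-dimensional mass-in-mass lattice shows how a resonant inclusion produces a negative effective dynamic mass, and hence an attenuation band, near its resonance frequency. All parameter values below are arbitrary and merely tuned so that the resonance sits near 440 Hz for readability.

```python
import numpy as np

def mass_in_mass_band(freqs_hz, m_host, m_res, k_res, k_lattice):
    """1D mass-in-mass chain: return |cos(q a)| at each frequency.

    Values > 1 indicate an evanescent Bloch wave, i.e. a band gap.
    m_host    - host (matrix) mass per cell [kg]
    m_res     - internal resonator mass [kg]
    k_res     - resonator spring stiffness [N/m]
    k_lattice - stiffness coupling neighbouring cells [N/m]
    """
    w = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    w0_sq = k_res / m_res                               # resonance (rad/s)^2
    m_eff = m_host + m_res * w0_sq / (w0_sq - w**2)     # effective dynamic mass
    cos_qa = 1.0 - w**2 * m_eff / (2.0 * k_lattice)     # lattice dispersion relation
    return np.abs(cos_qa)

# Arbitrary illustrative parameters (resonance near 440 Hz).
freqs = [300.0, 400.0, 440.0, 500.0, 560.0, 700.0]
band = mass_in_mass_band(freqs, m_host=1e-5, m_res=5e-6, k_res=38.0, k_lattice=2000.0)
for f, b in zip(freqs, band):
    print(f"{f:.0f} Hz: {'band gap' if b > 1.0 else 'propagating'}")
```

In this toy model the gap opens just below the resonator frequency and closes somewhat above it, which mirrors qualitatively the resonance-bound attenuation band reported for the heavy-resonator unit cell.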
435
The orexigenic hormone acyl-ghrelin increases adult hippocampal neurogenesis and enhances pattern separation
A relationship between metabolic state and cognition is well established.Consistently, calorie restriction has been shown to exert benefits on the brain.CR enhances cognitive performance on tasks in rodents, is neuroprotective in animal models of ageing and neurodegenerative disease, and improves memory in humans.However, the mechanisms underlying the beneficial neuroprotective and cognitive enhancing effects of CR are only beginning to be elucidated.The NAD-dependent protein deacetylase sirtuin-1 mediates, at least in part, the cellular effects of CR by increasing autophagy and related processes.Activation of the SIRT1 signalling pathway promotes cognition and has also been shown to mediate the anti-apoptotic and orexigenic actions of the hormone, ghrelin.Therefore, circulating levels of ghrelin, which is secreted from the stomach during periods of CR, may link energy balance and cognition.Whilst predominately known for its growth hormone releasing and orexigenic properties, the list of functions and biological effects produced by the peptide continue to be identified.Not only does ghrelin act in the pituitary and hypothalamus to regulate energy homeostasis, appetite, body weight, and adiposity, but recently the extra-hypothalamic actions of ghrelin, such as pro-cognitive, antidepressant, and neuroprotective properties have also been identified.Intracranial infusions and systemic ghrelin treatments at supra-physiological doses have beneficial mnemonic effects.The cognitive enhancing effects of ghrelin were replicated using two structurally non-peptide ghrelin receptor agonists.Additionally, ghrelin treatments have been shown to affect measures of hippocampal synaptic plasticity and increase hippocampal cell proliferation and neurogenesis, both within and outside the context of a memory task, placing ghrelin in a unique position to connect metabolic state with hippocampal neurogenesis and cognition.However, the effects of physiological levels of ghrelin are less well understood.Hippocampal neurogenesis is a unique form of plasticity that results in the generation of functionally integrated new neurons from progenitor cells in the dentate gyrus.Several studies indicate that these new cells make distinct contributions to learning and memory, and may be particularly important for the ability to separate highly similar components of memories into distinct complex memory representations that are unique and less easily confused, a process referred to as “pattern separation”.Although pattern separation refers to the theoretical computational mechanism involving the transformation of an input representation into an output representation that is less correlated, which has been studied effectively using electrophysiology, behavioural tasks have been developed to assess the use of such representations and demonstrate the relevance of pattern separation to cognition and behaviour.Using a modified Spontaneous Location Recognition task—in which the load on pattern separation was varied according to the distance between landmarks—we recently found that rats with inhibited DG neurogenesis demonstrated a separation-dependent impairment.In the ‘small separation condition’ rats with inhibited DG neurogenesis were impaired and unable to discriminate between the familiar and novel locations, whereas in the ‘large separation condition’, the rats were unimpaired.To investigate whether increasing circulating levels of acyl-ghrelin, within the physiological range, could increase DG neurogenesis and lead to lasting 
effects on neurogenesis-dependent mnemonic processes, rats were given daily injections of either saline or acyl-ghrelin on days 1–14 prior to assessing spatial pattern separation using SLR on days 22–26.As the final injection of acyl-ghrelin was given 8 days before the start of cognitive testing, any observed effects could not be attributed to the exogenous peptide being “on board” during behavioural testing.The results revealed that rats treated with acyl-ghrelin, but not those injected with saline, demonstrated increased numbers of new adult-born neurones and enhanced performance on the SLR task.The results are in keeping with the finding that elevating adult hippocampal neurogenesis is sufficient to improve pattern separation.All procedures were in strict compliance with the guidelines of the University of Cambridge and Home Office for animal care.Twenty four male Lister Hooded rats were housed in groups of four on a 12-h light cycle.All procedures were performed during the dark phase of the cycle.All rats were provided with ad libitum access to water and food, except during behavioural testing when food was restricted to 16 g per day for each animal to maintain weight at 95–100% free-feeding weight.Rats were handled for 2 consecutive days prior to the start of daily injections.Acyl-ghrelin peptide was dissolved in physiological saline at a concentration of 12 μg/ml.This dose of acyl-ghrelin was chosen as it has previously been shown to increase food intake and elevate plasma ghrelin concentrations to similar levels as a 24 h fast in rats.5′-Bromo-2-deoxyuridine was dissolved in physiological saline, 1 ml of NaOH and heated to 40–50 °C at a concentration of 20 mg/ml.Fig. 1A illustrates the timing of the injections.Rats were given daily intra-peritoneal injections of either saline or ghrelin on days 1–14 and BrdU on days 5–8 prior to assessing spatial pattern separation using SLR on days 22–26.Injections were performed at the same time each day.Behavioural testing was conducted in a black plastic circular arena covered with bedding and situated in the middle of a dimly lit room.The testing room had three proximal spatial cues and distal standard furniture.Objects used for testing were tall cylinder containers ∼20 cm in height.To prevent the rats from moving the objects during exploration, Blu-tack™ was used to secure the objects in place.Objects were wiped down with 50% ethanol solution between sessions.A digital camera recorded the testing sessions.Details of the SLR task have been previously published.Unlike other tests of pattern separation that use discrete trial procedures, SLR uses a continuous variable as a measure of performance, which yields sufficient data within a single trial to allow manipulations at different stages of memory.Our modified paradigm, enables us to manipulate the similarity of locations at the time of encoding/consolidation, when pattern separation is thought to occur, rather than at retrieval like others tasks used to assess pattern separation.All rats were habituated in 5 consecutive daily sessions in which they were allowed to explore the empty circular arena for 10 min.Testing began 24 h after the fifth habituation session.As illustrated in Fig. 
1B, each trial consisted of two phases.During the sample phase, three identical objects were placed 15 cm from the edge of the open field and 30 cm from the centre.Manipulating the separation between objects allowed for the distinct load of pattern separation to differ between conditions.In the small separation condition, two of the objects were separated by a 50° angle and the third at an equal distance from the other two.In the extra-small separation condition, two of the objects were separated by a 40° angle and the third at an equal distance from the other two.Control animals perform at chance level in the extra-small separation condition, so it is used to avoid any ceiling effect when assessing enhancements.For both conditions, rats were allowed to explore the arena and objects for 10 min during the sample phase and then placed back into their home cage for a 24-h delay.During the choice phase, rats were presented with 2 new identical copies of the objects previously used during the sample phase.A4 was placed in the previous position of A1.A5 was placed in between the sample placements of A2 and A3.Animals were allowed to explore the chamber and objects for 5 min before being returned to their home cage.All rats were tested on both the small and extra-small conditions, which were counterbalanced within groups.In both the sample and choice phases, exploration of an object was defined as a rat directing its nose to an object at a distance of 2 cm or less.Sitting on the object or digging at the base of the object was not considered exploratory behaviour.For the sample phase, the experimenter recorded exploration using stopwatches.For the choice phase, the experimenter scored exploration using a computer program JWatcher_V1.0, written in Java™.The program had two keys corresponding to the two objects.Exploration was recorded by pressing the appropriate keys at the onset and offset of a bout of exploration.Tissue preparation.Following behavioural testing, rats were anaesthetized by i.p. injection of Euthatal and perfused transcardially with phosphate buffered saline, followed by 4% neutral buffered formalin.The brains were removed and post-fixed in formalin for at least 24 h, followed by immersion in a 30% sucrose solution for at least 48 h. Coronal sections were cut along the entire rostro-caudal extent of the hippocampus using a freezing-stage microtome and collected for free-floating immunohistochemistry.Immunohistochemistry.All experiments were performed on free-floating sections at room temperature unless otherwise stated.For BrdU+/NeuN+, sections were washed three times in PBS for 5 min, permeabilised in methanol at −20 °C for 2 min and washed prior to pre-treatment with 2 N HCl for 30 min at 37 °C followed by washing in 0.1 M borate buffer, pH 8.5, for 10 min.Sections were washed as before and blocked with 5% normal goat serum in PBS plus 0.1% Triton for 60 min.Sections were incubated overnight at 4 °C in mouse anti-BrdU, washed as before and incubated in goat anti-mouse AF-568 for 30 min in the dark.Sections were washed again prior to a 1 h incubation in mouse anti-NeuN diluted in PBS–T. 
Following another wash the sections were incubated with goat anti-mouse AF-488 in PBS–T for 30 min in the dark. After another wash, sections were mounted onto superfrost+ slides with prolong-gold anti-fade solution. For BrdU+/Sox2+/S100β+, sections were treated identically to the BrdU+/NeuN+ IHC described above, with the exception that sections were first blocked using 5% normal donkey serum (NDS) in PBS–T for 30 min and subsequently blocked using 5% NGS in PBS–T for 30 min. Also, primary antibodies were applied as a cocktail that included rat anti-BrdU, rabbit anti-Sox2 and mouse anti-S100β in PBS–T overnight at 4 °C. Similarly, secondary antibodies were also applied as a cocktail that included donkey anti-rat AF488, donkey anti-rabbit AF568 and goat anti-mouse AF405 in PBS–T for 30 min in the dark. Brain sections were mounted as described above. For DAB-immunohistochemical analysis of DCX and BrdU labelling, sections were washed in 0.1 M PBS and 0.1 M PBS–T. For BrdU-DAB analysis, sections underwent acid treatment and neutralization as described above. Subsequently, endogenous peroxidases were quenched by washing in a PBS plus 1.5% H2O2 solution for 20 min. Sections were washed again and incubated in 5% NDS in PBS–T for 1 h. Sections were incubated overnight at 4 °C with goat anti-doublecortin or mouse anti-BrdU in PBS–T and 2% NDS solution. Another wash step followed prior to incubation with biotinylated donkey anti-goat or biotinylated donkey anti-mouse in PBS–T for 70 min. The sections were washed and incubated in ABC solution for 90 min in the dark prior to another two washes in PBS, and incubation with 0.1 M sodium acetate (pH 6) for 10 min. Immunoreactivity was developed in nickel-enhanced DAB solution followed by two washes in PBS. Sections were mounted onto superfrost+ slides and allowed to dry overnight before being dehydrated and de-lipified in increasing concentrations of ethanol. Finally, sections were incubated in histoclear and coverslipped using entellan mounting medium. Slides were allowed to dry overnight prior to imaging. Imaging and quantification. A one-in-twelve series of 30 μm sections from each animal was immunohistologically stained and imaged using a fluorescent microscope or LSM 710 META upright confocal microscope. BrdU+/NeuN+ immunoreactive newborn adult neurons were manually counted bilaterally through the z-axis using a 40× objective and throughout the entire rostro-caudal extent of the granule cell layer. Resulting numbers were divided by the number of coronal sections analysed and multiplied by the distance between each section to obtain an estimate of the number of cells per hippocampus. For quantification of stem cell self-renewal, one hundred BrdU+ cells were assessed for co-expression with Sox2 and S100β within the SGZ of the DG in each brain. The resulting numbers were expressed as a percentage of new stem cells, new astrocytes or new ‘other’ cells. DAB-stained sections were imaged using a Nikon 50i microscope and analysed using ImageJ software. All experiments were performed in a blinded manner. For the behavioural analyses, SLR sample data were analysed using a one-way analysis of variance to ensure the three sample objects were being explored equally. Results from the choice phases were expressed as discrimination ratios (D2), calculated as the time spent exploring the object in the novel location minus the time spent exploring the object in the familiar location, divided by the total exploration time. Group mean D2 scores were analysed with repeated measures ANOVA, followed by post hoc contrasts with Bonferroni correction.
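The discrimination ratio reduces to a simple formula: D2 = (time at the novel location − time at the familiar location) / total exploration time. As a minimal illustration (not code from the study, and with hypothetical exploration times), it could be computed as follows:

```python
# Minimal sketch of the discrimination ratio (D2) used to score the SLR choice phase.
# The exploration times below are hypothetical values chosen purely for illustration.

def discrimination_ratio(novel_s: float, familiar_s: float) -> float:
    """D2 = (novel - familiar) / (novel + familiar); ranges from -1 to 1, with 0 indicating chance."""
    total = novel_s + familiar_s
    if total == 0:
        raise ValueError("No exploration was recorded in the choice phase.")
    return (novel_s - familiar_s) / total

# Example: 22 s at the displaced (novel) location and 10 s at the familiar location
print(discrimination_ratio(22.0, 10.0))  # 0.375, i.e. a preference for the novel location
```

A positive D2 indicates preferential exploration of the displaced object, as expected when the original locations were successfully separated at encoding.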
For the histological analyses, statistical analyses were carried out using GraphPad Prism 6.0 for Mac. Statistical significance was assessed by unpaired two-tailed Student's t-test or one-way ANOVA with Bonferroni's post hoc test unless described otherwise: *p < 0.05, **p < 0.01, and ***p < 0.001. Pearson correlation and linear regression analysis were used to determine the goodness-of-fit between the number of new adult-born neurons and pattern separation-dependent memory performance. To investigate how increases in neurogenesis affect spatial pattern separation, we treated rats with daily injections of either acyl-ghrelin or saline and used the SLR task to evaluate the effects on memory consolidation when the pattern separation load was moderate or high. Fig. 1C shows that both the saline group and the acyl-ghrelin group showed a preference for the novel location in the small separation condition, whereas only the acyl-ghrelin-treated group showed a preference for the novel location in the extra-small condition. A two-way repeated measures ANOVA revealed a significant treatment × separation interaction (F = 8.003). Post hoc contrasts revealed a significant effect of separation in the saline-treated group, but not in the acyl-ghrelin-treated group. There was a significant difference between the saline and acyl-ghrelin groups in the extra-small condition, but no difference between groups in the small separation condition. During the sample phase, both saline- and acyl-ghrelin-treated rats spent equal amounts of time exploring each of the 3 objects. This indicates that the differences in discrimination ratios cannot be explained by preferential exploration of the more separated location during the sample phase. There was no main effect of treatment or condition on the proportion of time spent exploring each sample object. Total time exploring also did not differ between treatment groups or conditions. During the test phase, there was also no difference between total exploration times, suggesting that treatment did not affect motivation to explore during the sample or test phase. To examine whether daily acyl-ghrelin injections increase neurogenesis in the DG, we performed a BrdU pulse-chase experiment and counted immunolabelled neurons in the DG. Subsequent analysis showed that acyl-ghrelin treatment significantly increased the total number of new adult-born neurons in the DG. Further analysis revealed that this increase was specific to new neuron formation in the rostral DG rather than the caudal DG. Consistent with this finding, improved cognitive performance in the SLR task was correlated with an increase in the number of new neurons in the rostral DG. Furthermore, there was a 35% increase in the number of immature neurons in the DG 14 days after the final acyl-ghrelin injection. Similarly, analysis of total BrdU+ cell number using a DAB-based IHC approach revealed a 25% increase in the DG of acyl-ghrelin-treated rats, thereby providing further evidence of enhanced neurogenesis. However, acyl-ghrelin did not alter BrdU+ cell number in the hilus or promote the rate of neuronal lineage differentiation in the DG compared to saline control. Notably, the rates of stem cell self-renewal and new astrocyte formation were quantified throughout the rostro-caudal extent of the SGZ and showed that acyl-ghrelin did not significantly affect either new stem cell or new astrocyte numbers in the hippocampal niche. In this study, we investigated the long-term mnemonic effects of increasing
adult neurogenesis with daily acyl-ghrelin injections.Using the DG neurogenesis-dependent spatial task, SLR, we evaluated the performance of rats on the small and extra-small SLR conditions, which vary the demand for pattern separation.In support of our hypothesis, the results revealed that peripheral treatment with physiological amounts of acyl-ghrelin increased neurogenesis in the DG and also improved spatial pattern separation.The results are in keeping with the finding that elevating adult hippocampal neurogenesis is sufficient to improve pattern separation and that ghrelin administration can affect spatial cognition.The data are also consistent with the notion that the rostral hippocampus is primarily important in performing cognitive functions.To our knowledge, this is the first research to look at long-term mnemonic effects of pre-testing physiological acyl-ghrelin administration, and the first demonstration that acyl-ghrelin enhances spatial pattern separation via a mechanism consistent with elevated adult hippocampal neurogenesis.In the small separation condition, there was no difference in performance between the saline-treated and ghrelin-treated rats.There was a difference, however, in the extra-small separation condition, which positioned the landmarks closer together, thus increasing the requirement for the use of less overlapping, unique representations.Specifically, consistent with previous reports, the saline-treated rats did not show a preference for the novel location in the extra-small condition, but showed a significant preference for the displaced object in the small condition.In contrast, the acyl-ghrelin-treated rats showed a preference for the displaced objects in both conditions.Furthermore, histological analysis confirmed that the acyl-ghrelin-treated rats had a 58% increase in the number of new adult-born neurons in the DG, compared to the saline-treated rats.Importantly, because the behavioural testing took place 22 days after the first acyl-ghrelin injection and 8–10 days following the final injection, these findings suggest that the increase in acyl-ghrelin produced long-lasting improvements in spatial processing that could not be attributed to the exogenous hormone being “on-board” during behavioural testing.Furthermore, acyl-ghrelin did not appear to have an effect on motivation because total exploration times during the sample phase and the test phase did not differ between treatment groups.The rationale behind the SLR task is that when objects are closer together it is more challenging to form representations that are distinct and resistant to confusion, than when objects are further apart.If representations are not sufficiently separated during encoding, then the presentation of a new intermediate location may activate the same memory representation and thus will not be distinguishable.Because we have shown that DG manipulations impair memory retention only in the case where similar but distinct spatial representations require pattern separation, there is strong evidence that SLR is a suitable and reliable task for studying pattern separation.The nature of the SLR task provides several advantages over other tasks used to study spatial pattern separation.The single trial nature, ability to manipulate similarity in a parametric way, identical choice phases in every condition, and the fact that it does not use rewards are all desirable qualities.However, as with other spontaneous tasks paired with pharmacological manipulations, one limitation is the 
possibility that the treatments changed non-mnemonic performance variables, such as the animals’ motivation to explore an environment or their preference for novelty.However, because the testing took place 8–10 days after acyl-ghrelin treatment was discontinued, it is unlikely that these changes in motivation accounted for the observed differences in discrimination ratios.Although the exact mechanisms underlying the acyl-ghrelin-induced enhancement of pattern separation remain to be determined, our results are in agreement with previous work suggesting an important role of neurogenesis.Previously published work by our group using the SLR task demonstrated that attenuating neurogenesis in the DG impaired performance on the SLR task.That previous study also demonstrated that intracranial infusions in the DG of brain derived neurotrophic factor, a small dimeric secretory protein with an important role in synaptic and structural plasticity in the adult brain, enhanced pattern separation in ways analogous to the acyl-ghrelin treatment in the present study.However this effect of BDNF was acute, whereas the effects of acyl-ghrelin were long-lasting.In contrast to our findings, as well as previously published data, Zhao et al. reported that daily systemic supra-physiological dose of ghrelin for 8 days increased neurogenesis but had no effect on spatial memory in mice, as measured by performance on a water maze task.This finding was not unexpected as previous studies looking at the relationship between enhancement of neurogenesis and spatial memory have provided mixed results, and it is possible that the source of variation is associated with the load on pattern separation.Hippocampal neurogenesis appears to be critically involved in spatial pattern separation, and how much the spatial conditions/tasks rely on this process, could determine the effects manipulating neurogenesis has on performance.Because performance on the SLR task is DG-dependent, and particularly sensitive to manipulations altering plasticity-related factors and neurogenesis, it is reasonable to suggest that the cognitive enhancing effect of acyl-ghrelin treatment may be a result of the increase in neurogenesis.As ghrelin has been shown to cross the blood brain barrier and act on the GHS-R1a in the DG, which is the only functional ghrelin receptor characterized, it is possible that increasing circulating acyl-ghrelin in the present study had direct effects in the DG."Moreover, ghrelin-receptor null mice exposed to chronic social defeat stress display more depressive-like behaviour and impaired hippocampal neurogenesis, therefore, providing further support for ghrelin's important role in this form of adult brain plasticity.However, it is important to recognize that although it is possible that acyl-ghrelin acted directly in the hippocampus, the indirect effects of acyl-ghrelin cannot be excluded.For example, ghrelin indirectly stimulates the production of insulin-like growth factor-1, which is known to increase neurogenesis.Future studies will need to address whether the beneficial effects result from ghrelin acting directly in the DG and depend on increased neurogenesis or other potential mechanisms.In addition to further elucidating the extra-hypothalamic effects of ghrelin, this research has potential clinical applications.Consistent with aged animal models demonstrating impairments in pattern separation, healthy older adults also show impaired memory performance and less efficient pattern separation, compared to younger 
adults.The pattern of impairment seen in adult humans is similar to that seen in animal models, in that greater dissimilarity is required for elderly participants to successfully encode information as distinct."Furthermore, neurodegenerative disorders often display coexisting metabolic dysfunction, and there are several converging lines of evidence linking altered metabolism with an increased risk of developing Alzheimer's disease and dementia.Notably, a high fat diet and obesity are associated with reduced circulating levels of acyl-ghrelin in rats and humans, respectively.In addition, obesity is associated with an increased risk of dementia in humans.Our data suggest an important mechanism whereby acyl-ghrelin may link metabolic and cognitive function.Elucidating the underlying mechanisms of this relationship holds promise for identifying modifiable lifestyle factors and novel therapeutic targets that might exert beneficial effects on the brain.In summary, the present study investigated the long-term effects of elevating systemic acyl-ghrelin treatment on spatial memory.To the best of our knowledge, we provide the first data demonstrating a previously unknown physiological function for a circulating hormone that is regulated by feeding, in enhancing adult hippocampal neurogenesis and promoting pattern separation dependent memory.This is the first step towards determining whether modulating ghrelin can lead to enhancements in cognition via alterations in neurogenesis.The funding sources had no role in the conduct of this research.
An important link exists between intact metabolic processes and normal cognitive functioning; however, the underlying mechanisms remain unknown. There is accumulating evidence that the gut hormone ghrelin, an orexigenic peptide that is elevated during calorie restriction (CR) and known primarily for stimulating growth hormone release, has important extra-hypothalamic functions, such as enhancing synaptic plasticity and hippocampal neurogenesis. The present study was designed to evaluate the long-term effects of elevating acyl-ghrelin levels, albeit within the physiological range, on the number of new adult born neurons in the dentate gyrus (DG) and performance on the Spontaneous Location Recognition (SLR) task, previously shown to be DG-dependent and sensitive to manipulations of plasticity mechanisms and cell proliferation. The results revealed that peripheral treatment of rats with acyl-ghrelin enhanced both adult hippocampal neurogenesis and performance on SLR when measured 8-10 days after the end of acyl-ghrelin treatment. Our data show that systemic administration of physiological levels of acyl-ghrelin can produce long-lasting improvements in spatial memory that persist following the end of treatment. As ghrelin is potentially involved in regulating the relationship between metabolic and cognitive dysfunction in ageing and neurodegenerative disease, elucidating the underlying mechanisms holds promise for identifying novel therapeutic targets and modifiable lifestyle factors that may have beneficial effects on the brain.
Involving Communities in the Targeting of Cash Transfer Programs for Vulnerable Children: Opportunities and Challenges
There is a growing policy emphasis in the field of international public health and development on the need for community involvement in health and development programs.Reflecting the community asset framework, the World Bank argues that, through the involvement of community members, a variety of local skills and abilities can be drawn upon in the implementation of social development programs, which, in turn, has the potential to improve local ownership of programs and increase their sustainability.Involving community members in the identification of beneficiaries of a cash transfer program may therefore, through its recognition and use of local resources and knowledge, facilitate a sense of local program ownership, in a way survey based targeting tools may not.Household censuses are frequently used to collect information for targeting social welfare programs & German Technology Cooperation, 2007; Robertson et al., 2013; Schubert & Huijbregts, 2006).The most vulnerable and/or poorest households can be identified by asking questions about socio-demographic characteristics of households or about household wealth.Collection of data on household assets, in census questionnaires, is a popular method for obtaining information about household wealth and thereby identifying poor households.This method makes use of simple questions and data on several household assets can be used together to create a wealth index by which households can be ranked and the poorest households thus identified.Direct observation of assets by the interviewer can reduce recall and social-desirability bias compared with other methods—e.g., data on household expenditure or income, which often vary significantly over short time periods and for which reporting may be influenced by social norms on the acceptability of discussing household wealth.Studies suggest that the extent to which asset-based wealth indices correlate with other indicators of poverty varies by country.A study using data from India, Pakistan, and Nepal found that asset-based wealth indices were associated with school-enrollment and could predict school-enrollment as accurately as household expenditure data.One advantage of using a population-based census is that it is relatively simple to ensure the systematic application of a standardized questionnaire across an entire population.An important disadvantage is that large-scale censuses are expensive and time-consuming to carry out.Furthermore, there are often few opportunities for community involvement in census-based targeting.If external definitions of vulnerability and poverty are used, communities may feel resentment toward the associated social welfare programs and it could cause conflict within the community.Alternative targeting methods that directly involve community members in the targeting process are one means of achieving community participation.For example, a group of community representatives could be responsible for identifying vulnerable households or could use census data in making the final decision about which households should be selected & German Technology Cooperation, 2007).Participatory wealth ranking is a method for involving communities in the selection of the poorest households.Meetings are held with community representatives to discuss the characteristics of households in different wealth categories.The representatives then use these categories and characteristics to rank the households in the community according to their wealth status and thus the poorest households can be 
identified.Community-based methods allow information about household wealth and vulnerability to be generated relatively quickly and cheaply.Studies from Tanzania and southern Zimbabwe found participatory wealth ranking data correlated well with wealth indices based on household-level agricultural wealth.However, Hargreaves et al. compared wealth indices based on a wider range of variables with data generated using participatory wealth ranking and found only limited agreement between the two methods for a population in rural South Africa.Cash transfer programs are social welfare interventions that aim to help households meet their basic needs and provide care for vulnerable children.In conditional cash transfer programs, beneficiary households must meet certain conditions, usually relating to school attendance and uptake of health services, in order to receive the transfers.Unconditional cash transfers are provided without conditions.National cash transfer programs in Latin America) use household-level means testing based on routinely collected data on income to target children living in the poorest households.In sub-Saharan Africa, these data are often unavailable.Programs in Zambia & German Technology Cooperation, 2007) and Malawi targeted “ultra-poor, labor-constrained households” by identifying households with high ratios of dependents to working-age adults.Demographic and economic data were collected from potentially vulnerable households identified by community committees.These data were then used to rank households based on their level of destitution and community committees discussed and verified the list and identified the 10% most incapacitated households.This method was designed to be simple and to target economically vulnerable households and/or those suffering from the demographic consequences of the HIV epidemic.Attempts to rigorously evaluate these targeting methods, in the context of cash transfer programs in sub-Saharan Africa, have been limited.A study in Zambia found that targeted households were more likely to be elderly or single-headed or to contain orphaned children or disabled members & German Technology Cooperation, 2007).A study from Malawi found that targeted households were more likely to be caring for orphaned children or someone sick with HIV or TB.However, it remains to be established whether census-based or community-based participatory methods perform better with respect to reaching the most vulnerable children.There are also questions pertaining to the appropriateness and accountability of cash transfers to beneficiaries and their wider community.To date, little has been done to incorporate and bring forward the perspectives of beneficiaries, let alone report on their experiences of engaging with cash transfers.A recent report, reviewing the experiences of beneficiaries and implementing stakeholders of five major unconditional cash transfer programs in sub-Saharan Africa, identified a need to promote community participation in poverty alleviation programs in order to secure greater accountability and program responsiveness to local needs and program shortcomings.This paper contributes directly to policy and program recommendations on community participation in the targeting of cash transfers.From 2009 to 2011, we conducted a community-randomized controlled trial of a cash transfer program for orphaned and other vulnerable children in Manicaland, eastern Zimbabwe.The program was funded by the Program of Support for the Zimbabwe National Action Plan for 
Orphans and Vulnerable Children.We investigated the effects, on school attendance and the uptake of child vaccinations and the uptake of birth registration, of a conditional cash transfer program and an unconditional cash transfer program.Every two months, beneficiary households received US$18 plus US$4 per child in the household up to a maximum of three children.We did not use an experimental design to compare different targeting methods.However, we used a combination of survey-based and community participatory methods to target vulnerable households caring for children.This provided us with an opportunity to compare census and community derived targeting information.In the baseline survey for this trial, we collected data on household-level targeting information and child development indicators.As part of the evaluation of the cash transfer program, we also investigated community responses to the program through focus group discussions and key informant interviews with program beneficiaries, those delivering the program, and other community members.In this paper, we investigate and compare, from several perspectives, the success of community- and census-based targeting methods for cash transfer programs for vulnerable children.To determine whether our census- and community-based targeting methods successfully enumerated all households in the study areas and whether they identified the same households as vulnerable, we compared household eligibility data collected in the baseline census with eligibility data collected through community participatory methods.We then compared the effectiveness, coverage and efficiency of census- and community-based methods in reaching children with poor developmental indicators.Finally, in light of these findings, we use qualitative data to explore community perspectives on the benefits and challenges of involving community members in the selection of cash transfer beneficiaries.Manicaland province is located in eastern Zimbabwe, on the border with Mozambique.Many households in the region make their living from agriculture, both subsistence agriculture and in large scale commercial tea and tobacco estates.From 1999, Zimbabwe experienced severe economic decline, with record levels of hyperinflation that peaked around 2008 and then stabilized in 2009.In 1998–2000, HIV prevalence in Manicaland was 25.3% in women and 18.8% in men aged 15–49 years.By 2006–2008, the prevalence had fallen to 18.7% in women and 12.5% in men.Orphan prevalence is high in the region: 20.8% of children aged 0–14 years had lost at least one parent in 2003–2005.The Manicaland Cash Transfer trial for OVC began in July 2009 in 30 communities with an average of around 400 households in each community.The communities comprized four socio-economic strata—small towns, roadside settlements, subsistence farming areas and large-scale agricultural estates.The cash transfer programs were designed to support children in households that had been affected by extreme poverty and/or the severe demographic impacts of the HIV epidemic.An initial feasibility study was conducted to identify important indicators of household vulnerability, including a vulnerability mapping exercise based on national data on vulnerable children and discussions with community members and other stakeholders.It was decided to target all children within vulnerable households to avoid conflicts that could arise if specific children within households were singled out for assistance.Only households caring for children were eligible 
for the program. Households were eligible for the cash transfer program if they cared for at least one child aged less than 18 years, were not in the richest 20% of households and met at least one of the following criteria: being in the poorest 20% of all households, caring for orphans, having a household member with a chronic illness or disability, or being a child-headed household. Data on household eligibility were collected in a baseline household census. Lists of all households in the communities were compiled from lists of households that had ever been enumerated in an on-going cohort study in the area. This cohort study had performed a census in the area every two or three years since 1998. New households were added to the list as they were encountered during the survey. Local guides from each community asked representatives from the households in their area to convene at a central meeting point on a specific day. Each central meeting point was visited on three different days. Trained research assistants conducted interviews, in the local language Shona, with the most senior available member of each household. To identify the poorest 20% of households, data were collected on household assets—source of drinking water, type of toilet facility, type of house, type of floor in the main dwelling, ownership of a radio, a television, a motorbike or a car and whether or not the household had its own electricity. The household asset data were used to create a wealth index for all households in the study using a simple summed score of asset ownership. The households in each community were then ranked on this index and the poorest 20% were identified. The summed score index was developed and validated using data collected previously in Manicaland. Data on household socio-demographic eligibility criteria were also collected. We used the following definition of chronic illness: very sick for at least 3 months during the past 12 months, where “very sick” was defined as being too sick to work or do normal activities around the house. We also asked whether any household members had any form of disability. Children were defined as orphans if either of their parents was deceased. Following the census, lists of all households in the study clusters, along with their status with respect to the various eligibility criteria, were prepared and passed to a local NGO who undertook a community-based targeting process. Small groups of community leaders, including village chiefs, village heads, councillors and other representatives nominated by the community during sensitization meetings, performed a participatory wealth ranking (PWR) procedure. The groups, led by the local NGO, were asked to define characteristics of “poorest”, “poor”, “average”, “less poor” and “least poor” households. Using these characteristics as a guide, the groups were then asked to rank the households on the census lists by assigning each household to one of the five categories listed above. Equal numbers of households were intended to be assigned to each category so that the poorest 20% could be identified. Larger community meetings were also held to verify the accuracy of the household socio-demographic eligibility data. The members of these groups were familiar with the households in their area. During the trial, eligible households were identified using the survey and the community-based participatory methods. A household had to be identified as eligible by both the census- and community-based targeting methods in order to be enrolled in the cash transfer program.
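As a concrete illustration of the census-based screening described above, the sketch below builds a simple summed asset score, ranks households within each community, and applies the eligibility rules. It is a minimal sketch with hypothetical field names and toy data, not the study's actual index, which was developed and validated separately on Manicaland data.

```python
# Hedged sketch of census-based eligibility screening (illustrative field names, toy data).
import pandas as pd

# Assumed binary (0/1) asset indicators; the real index used the census items listed above.
assets = ["improved_water", "improved_toilet", "brick_house", "finished_floor",
          "radio", "television", "motorbike", "car", "electricity"]

df = pd.DataFrame({
    "household_id": [1, 2, 3, 4],
    "community": ["A", "A", "A", "A"],
    "n_children": [2, 0, 3, 1],
    "orphan_in_hh": [True, False, False, True],
    "chronic_illness_or_disability": [False, False, True, False],
    "child_headed": [False, False, False, False],
    **{a: [0, 1, 0, 1] for a in assets},  # toy asset ownership data
})

# Simple summed asset score, ranked within each community as a percentile.
df["wealth_score"] = df[assets].sum(axis=1)
df["wealth_pct"] = df.groupby("community")["wealth_score"].rank(pct=True)

poorest_20 = df["wealth_pct"] <= 0.20
richest_20 = df["wealth_pct"] > 0.80
vulnerable = (poorest_20 | df["orphan_in_hh"]
              | df["chronic_illness_or_disability"] | df["child_headed"])

# Eligible: cares for a child, is not in the richest 20%, and meets at least one criterion.
df["census_eligible"] = (df["n_children"] > 0) & ~richest_20 & vulnerable
print(df[["household_id", "wealth_score", "census_eligible"]])
```

In the trial itself a household also had to be confirmed by the community-based process, so a flag like this would only be the first of the two screens.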
For the ordered categorical wealth data, the maximum value of the weighted kappa statistic is affected by the distribution of households across different wealth categories—the asset-based wealth index and the PWR procedure may rank households in roughly the same order, but if the communities did not assign equal numbers of households to each category during the PWR, then the kappa statistic would be reduced when comparing this distribution with quintiles calculated using the wealth index. As a comparison to the weighted kappa statistic based on comparing the asset-based wealth quintiles with the wealth distribution produced by the PWR procedure, we calculated the maximum possible weighted kappa statistic that could be produced by this method if the households were ranked in the same order by the two procedures but differed with respect to the size of the wealth categories as observed. Secondly, we calculated a weighted kappa statistic comparing the wealth categories produced by the PWR procedure with those produced when the wealth-index-ranked household list was divided into categories with the same distribution as those produced by the PWR procedure. In the baseline census, data were also collected on the primary outcome indicators for the trial: birth registration and vaccination status among children aged 0–4 years and school attendance among children aged 6–17 years. These primary indicators were selected to represent various types of health, education, and social vulnerability among children across a range of ages. We defined four poor child-level outcomes: an incomplete vaccination record among children aged 0–4 years, lack of a birth certificate among children aged 0–4 years, and non-enrollment in school or less than 80% attendance over the last 20 school days, assessed separately among children aged 6–12 years and children aged 13–17 years. We used these data to compare the effectiveness and efficiency—with respect to reaching children with poor health, education, and social outcomes—of targeting the poorest households identified by the PWR procedure and the poorest 20% of households based on the asset-based wealth index. To account for differences in the proportions of children assigned to the “poorest” category by the PWR procedure and the asset-based wealth index quintiles, we also defined a targeting method based on the asset-based wealth index ranking that identified the same proportion of “poorest” children as the PWR procedure. We also defined two targeting methods that combined the PWR procedure and the asset-based index: an “inclusive” method, where any child considered to be in the “poorest” category based on either the PWR procedure or the asset-based wealth quintiles would be targeted, and an “exclusive” method, where a child would need to be considered to be in the “poorest” category by both methods to be targeted. The effectiveness of each of the five poverty-based targeting methods at reaching children with poor outcomes was compared using age- and sex-adjusted logistic regression models to estimate the odds ratio that targeted households contained children with poor outcomes relative to households not targeted by each method. To compare the extent to which children with poor outcomes were “missed” by each method, we present the proportion of children with poor outcomes that were reached and compare this with the proportion of all children that were reached by each method. We compared the efficiency of the methods by calculating the number of children with each of the poor outcomes that were reached per child targeted.
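To make these three measures concrete, the sketch below computes them for a single (hypothetical) targeting method and one poor outcome using simulated data. The odds ratio here is adjusted for age and sex with a logistic regression, as in the study, but the variable names, data, and model specification are illustrative assumptions rather than the study's actual analysis code.

```python
# Hedged sketch of the effectiveness / coverage / efficiency measures (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(6, 18, n),
    "sex": rng.integers(0, 2, n),
    "targeted": rng.integers(0, 2, n),  # 1 = child lives in a household reached by the method
})
# Simulate a poor outcome that is somewhat more common among targeted children.
df["poor_outcome"] = rng.binomial(1, 0.15 + 0.10 * df["targeted"])

# Effectiveness: age- and sex-adjusted odds ratio of a poor outcome, targeted vs not targeted.
model = smf.logit("poor_outcome ~ targeted + age + C(sex)", data=df).fit(disp=False)
odds_ratio = float(np.exp(model.params["targeted"]))

# Coverage: share of children with the poor outcome who are reached, vs share of all children reached.
coverage_poor = df.loc[df["poor_outcome"] == 1, "targeted"].mean()
coverage_all = df["targeted"].mean()

# Efficiency: children with the poor outcome reached per child targeted.
efficiency = df.loc[df["targeted"] == 1, "poor_outcome"].mean()

print(f"adjusted OR = {odds_ratio:.2f}; coverage = {coverage_poor:.2f} "
      f"(vs {coverage_all:.2f} of all children); efficiency = {efficiency:.2f}")
```

A method is efficient when the last figure is high, that is, when relatively few children need to be targeted for each child with a poor outcome who is actually reached.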
Socio-demographic information on child-headed households and the orphan status, chronic illness status, and disability status of household members was not collected independently in the household census and the community-based participatory data collection: the community groups verified the data collected in the census rather than generating their own data. We therefore have not compared the effectiveness of census-based and community-based socio-demographic targeting of vulnerable children. Comparisons between socio-demographic and poverty-based targeting methods have been presented elsewhere. To explore community perceptions of the cash transfer program, including the procedures used to identify and select eligible households, we conducted 35 individual interviews and 3 focus group discussions. In an effort to gather a wide range of perspectives, we invited community members with different types of involvement in the cash transfer program, including: 7 cash transfer beneficiaries; 8 conditional cash transfer beneficiaries; 5 non-beneficiaries; and 15 key informants who contributed to the implementation of the program within the communities. The participants were randomly selected from a list of program stakeholders and recruited by Shona-speaking researchers from the Biomedical Research and Training Institute in consultation with local community guides. The qualitative work was not conducted at the same time as the community-based participatory targeting procedures. With the exception of one individual interview, which was conducted in English, all interviews were conducted in the local Shona language, using a topic guide developed specifically to explore participants’ perspectives on the cash transfer program. The interview guides covered topics such as the role of community members in the implementation of the program, procedures and performance of targeting methods, changes to community life as a result of the cash transfer program, the impact of cash transfers on the benefitting households, compliance and monitoring procedures and challenges, as well as recommendations for future programs. The individual interviews lasted an average of 40 min, while the group interviews took an average of 94 min. The interviews were translated and transcribed into English and imported into Atlas.ti v6.1, a qualitative software package, for coding and examination. This involved an iterative process allowing for both a priori reasoning and surprises. This first stage of the analysis generated a total of 90 codes. In line with Attride-Stirling’s thematic network analysis, codes were clustered together into more interpretative organizing themes. As we did not seek to report on all the themes emerging from our qualitative analysis in this paper, but to examine community perspectives on the interface between their involvement and support of the program, we report on three organizing themes, which comprise 19 codes, or basic themes, that have direct relevance to this topic and contextualize our quantitative findings. Table 2 illustrates the breadth of related themes emerging from this study, giving detail to: how the program worked within local structures; the community committees’ active involvement in implementation; perceived benefits of participatory wealth ranking; community verification; perceptions of fair selection; transparency; and the limits of community involvement. We first present our quantitative findings, comparing the households identified by the census-based and community-based targeting methods
and investigating the relative effectiveness, coverage, and efficiency of these targeting methods. We then supplement and contextualize these findings with community members’ perspectives of the different targeting methods, highlighting additional benefits and challenges of involving community members in the selection of cash transfer beneficiaries. A total of 16,887 households were identified as having been enumerated in at least one census since 1998, of which 11,820 households completed a household census as part of the cash transfer study. Of those who did not complete a census, only 10 refused to be interviewed. The rest had either relocated or their dwelling was empty or no longer existed. For 863 missing households, the reason they were not interviewed was unknown. Of those households interviewed, 10,538 cared for at least one child under 18 years old. The coverage of the community-based participatory wealth ranking and of the socio-demographic verification was less complete than the coverage of the household census—2,455 of the households caring for children that completed a census were missing PWR data and 899 were missing community verification of socio-demographic characteristics. Using the census data, we compared households missing data with households that were not missing data. Households missing PWR data were significantly less likely to have poor socio-demographic vulnerability characteristics. Few significant differences were found between households missing community-based socio-demographic verification data and households not missing these data: households missing data were less likely to be female-headed, and children aged 6–12 years living in households with missing data were more likely to have poor school attendance. Panels A and B of Figure 1 show the distributions of household wealth using information on household assets from the population census and information from the PWR procedure. Using the asset-based wealth index, we divided the population roughly into equal-sized wealth quintiles. The slight variation in the size of the categories, including 18% of households being in the poorest category, is due to the fact that many households had exactly equal scores in the wealth index, which resulted in category cut-off points slightly above or below the quintile cut-off points. Using the categories produced during the PWR procedure, it is clear that households were not evenly distributed across the five wealth categories by the community groups: 28% were assigned to the poorest wealth category and very few households were assigned to the two least poor categories. Instructions to the community groups to assign households evenly across the five wealth categories were not well followed. A poor level of agreement was found between the PWR categories of household wealth and the asset-based index quintiles—the weighted kappa statistic was 0.28, although the maximum possible weighted kappa value, assuming both procedures ranked households in the same order but differed with respect to the sizes of the wealth categories, was 0.54. When we compared the level of agreement between the PWR categorization and the asset-based wealth index categories produced to match the size of the PWR categories, the level of agreement remained low. Panel C of Figure 1 shows the breakdown of the households into wealth categories based on the PWR procedure for each quintile of the asset-based wealth index. Households in the poorest quintile of the asset-based wealth index had the highest proportion of households assigned to the poorest category by the PWR procedure.
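A weighted kappa of the kind reported above can be computed directly from two categorical rankings of the same households. The sketch below uses scikit-learn's implementation with linear weights on toy labels; the weighting scheme and data are assumptions for illustration, not the study's own calculation.

```python
# Hedged sketch: agreement between PWR wealth categories and asset-index quintiles
# via a weighted kappa (toy labels; linear weights assumed for illustration).
from sklearn.metrics import cohen_kappa_score

# Ordered categories coded 1 (poorest) to 5 (least poor) for the same ten households.
pwr_category   = [1, 1, 2, 1, 3, 2, 1, 4, 2, 3]  # community participatory wealth ranking
asset_quintile = [1, 2, 2, 3, 3, 1, 2, 5, 4, 3]  # census asset-based wealth index quintile

kappa = cohen_kappa_score(pwr_category, asset_quintile, weights="linear")
print(f"weighted kappa = {kappa:.2f}")  # 1 = perfect agreement, 0 = chance-level agreement
```

Because weighted kappa penalizes large disagreements more heavily than adjacent ones, it suits ordered wealth categories, but, as noted above, its maximum attainable value depends on how evenly the two procedures distribute households across categories.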
A somewhat similar pattern was found across all quintiles—a large proportion of households in each asset-based wealth index quintile were assigned to the analogous PWR category. However, across all the wealth quintiles, a substantial proportion of households were assigned to the poor or poorest categories by the PWR procedure, including among households in the less poor and least poor asset-based wealth index quintiles. In panel D of Figure 1, the distribution of households according to the asset-based wealth index is shown for each wealth category from the PWR procedure. Those households categorized in the poor or poorest categories by the PWR procedure were much more likely to be in the poorest two quintiles of the asset-based wealth index. Among the households categorized as better-off by the PWR procedure, there were high proportions in the richer quintiles of the wealth index. Very few households categorized as less poor or least poor by the PWR were in the poor or poorest quintiles of the wealth index, although a relatively large proportion of households categorized as poorest or poor by the PWR were in the better-off two quintiles of the wealth index. Table 4 shows the proportion of households with each of the socio-demographic characteristics of vulnerability according to the household census and the community verification exercise, and the kappa statistics measuring the agreement between these two information sources. For most characteristics, there is good agreement between the census data and the community verification exercise, with the strongest agreement found for identification of paternal orphans in the household. A low kappa statistic was found for agreement in the identification of child-headed households. Child-headed households were found to be very rare in the census data and the community verification data. Among households with non-matching socio-demographic data, census-based information was significantly more likely to indicate a chronically-ill resident, a disabled resident, or a paternal or double-orphaned household member than the community verification exercise. There were no significant differences, among households with non-matching data, between the census- and community-based information for reporting of maternal orphans or child-headed households. A key theme that emerged from our qualitative interviews was the added value of community involvement—manifested through the PWR procedure and community verification process—in facilitating program ownership. One community leader, when discussing pathways to impact, attributed the perceived success of the program to “the local way of doing things”: “The programme was successful because it valued people’s input… it drew from the local way of doing things. Above everything I also saw that you know that local leaders are important in your activities and you always take your time to explain your projects to them.” (Community leader). The links between program success and the program’s recognition of local structures and “way of doing things” were articulated in many different ways. For example, there was a sense that the PWR procedure gave the community members an opportunity to consider a whole variety of locally relevant information that they believed determined the vulnerability of children. For example, in answering the question “How did local people define eligible households?”, one committee member said: “People were looking at things like whether the household lives a better life and
whether any family member is gainfully employed and bringing in meaningful income. You see there are people who can’t even afford fees for their children. So we were also looking at whether the household has any orphans they were taking care of, and also whether they were struggling to make ends meet. Of course, the issue of ownership of livestock was looked at but generally many people do not have many cows, even some well-to-do families here might not even own any livestock, but people know each other’s living standards. There are families here whose members are handicapped such that they can’t do the daily duties like many of us here. Just by mentioning the name people will tell you whether that household is deserving or undeserving.” (Committee member). But it was not just the fact that the PWR procedure allowed for local information to guide the selection of beneficiaries that made it a favorable targeting method. In a response to a question on the benefits of having community meetings to rank vulnerable households, one community member said that the PWR procedure encouraged widespread community involvement, which, in turn, contributed to transparency and the identification of the most vulnerable children. “The advantage of that process is that everyone will be present at the meeting and they will be hearing the selection process. They will know how the households have been ranked and confirm that the household deserve to be in the category they have been placed.” (Community member). Both of these observations are supported by a community leader, who further adds that community members know each other, and their situations, better than any outsider and are thus in an ideal position to identify the most vulnerable children. He also adds that community involvement fosters ownership and an interest to see the objectives of the program being met. “I also think people know each other better than anyone from outside…
they actually know who should benefit first. This kind of selecting avoids the possibility of undeserving households benefiting. There is also another advantage whereby the community will assume the responsibility of making sure the program is successful because they won’t have anyone except themselves to blame if the program fails to achieve desired goals.” (Community leader). Another point, raised by a program implementer, was how the involvement of community members contributed to a sense of transparency and fairness, reducing community conflict and the probability of anyone feeling jealous: “We know each other better than any outsider, so I felt having the PWR was really a good way of involving the community and making sure there is transparency because at the end of the day this was money and everyone wanted it, but when the most deserving get it no one would cry foul, or accuse any from the implementing agencies to have done anything wrong. I think if they had just sat somewhere and came up with names of beneficiaries, people would have complained, but now in this case no one can complain as it was for the community to decide who should get preference.” (Program implementer). Not only did community involvement contribute to program buy-in and acceptance from the community at large, beneficiaries also spoke about the changes they had witnessed in their community, typically referring to it as “united” or “strengthened”: “This program brought unity to people in our community. The community has been strengthened.” (Adult CT beneficiary). These local observations suggest that involving community members in the targeting process encouraged them to have a stake in the program—working toward its success—presenting additional benefits of the PWR procedure and community verification process. Nonetheless, while community involvement was generally seen as key to the success of the program, some community members felt there should be a limit to what responsibilities should be passed onto them, arguing that they should not be doing the job of the implementing NGO. While they saw community involvement as positive and a prerequisite, they felt the responsibility should not fall on them—highlighting an important challenge in finding a balance between top-down and bottom-up program implementation that is acceptable to everyone. “We don’t want the involvement of the local people. We want you to do your job. We don’t want the responsibility to fall on local people. We want you working with community members and working with us as one team.” (Community member). Furthermore, community participation is notoriously challenging. Any community is characterized by people with competing interests and power relations. Favoritism, nepotism, and lying were mentioned as a challenge to community involvement. One respondent said that “there are some people who ask: ‘why was this person selected and not me? It must be down to favoritism and nepotism.’” Although it is difficult to ascertain whether such a claim is down to jealousy or real observations, it represents a concern. In discussions on the challenges of community involvement, it emerged that some community members had tried to manipulate the eligibility criteria for the benefit of themselves or others, but that the transparent process of involving the community had minimized such attempts. “There are not many non-deserving children who got into the program. Some people tried to cheat by stating that they were orphans and include their children as orphans. They will end up as 4 people. But these
problems were resolved by asking the selected households to bring their children’s birth certificates and parents’ death certificates to verify the orphan status of the child.” (Community member). Overall, the qualitative data highlighted the importance of involving community members in targeting cash transfer programs in order to capitalize on existing local resources, enhance ownership of the program, improve transparency and perceptions of fairness, and reduce the potential for jealousy and conflict. The benefits of community involvement in the targeting of cash transfer programs include enhanced community ownership, increased transparency, and reduced potential for conflict and jealousy. Making use of existing local resources also reduced the need to build parallel structures. However, our quantitative analysis showed that there was poor agreement between the community-based wealth ranking procedure and an asset-based wealth index in terms of describing the distribution of wealth in Manicaland and identifying the poorest households. A similar result was found when comparing PWR data with survey data collected in rural South Africa. We did not find that the poor agreement was attributable to the communities’ failure to assign equal numbers of households to each of the wealth categories, as was intended. Community groups undertaking PWR procedures may take into consideration factors that are not directly related to household wealth when ranking households. This could explain to some extent the poor agreement between the PWR ranking and the asset-based index. However, we did not collect quantitative or qualitative data, other than the assignment of households to the five poverty categories, during our PWR procedure. Thus we are unable to investigate further the information that was used by communities to rank the households. In the South African study, community groups did discuss non-wealth-related characteristics, such as the presence of orphaned children in households, when undertaking the PWR. Investigating possible reasons for the lack of agreement between PWR and asset-based wealth indices is an important area for future work. It is not clear if the reason for the disproportionate assignment of households to the five wealth categories was that the community representatives were inadequately trained in the methodology or whether they found the method unacceptable. It may be the case that they adapted the PWR procedure in accordance with the perceived needs of the community. For example, the PWR-based wealth distribution may more accurately represent the distribution of wealth within the population than wealth quintiles—it is likely that many households in the area are extremely poor and few are extremely rich. The community groups were instructed to discuss characteristics of households in different wealth categories and then use these characteristics to rank the households and assign equal numbers of households to each of the five wealth categories. If large numbers of households had characteristics that the communities associated with the poorer households, this may explain the observed wealth distribution produced by the PWR procedure, with households in the richer wealth index quintiles being categorized in the poorer PWR categories. Many households were missing PWR and socio-demographic verification data from the community. It is not clear exactly why this happened. There was some evidence that those households that were excluded from the PWR procedure were significantly less likely to have vulnerable
socio-demographic characteristics or to be caring for children with poor outcomes.This suggests that the community groups were excluding some better-off households from the ranking procedure.This may have been because such households were less well-known within the communities—perhaps because they were less involved in community activities.It is also possible that the community groups preferred not to consider ranking households they perceived were not in need of assistance.Households excluded from the PWR procedure were more likely to be caring for children with poor school attendance.In this population, recent migration has been found to be associated with school drop-out among vulnerable children, which suggests that more transient households may also be excluded from the PWR procedure.However, it should also be noted that many households were also missing data from the community verification exercise and there was little evidence of a systematic bias in the exclusion of households from this process.This suggests that there may have been a more general problem with the application of the community-based targeting methods, perhaps resulting from poor training or the imposition of geographical community boundaries used in the community-randomized trial, which may have included households that were unfamiliar to the community leaders involved in the PWR and verification exercises.There was better agreement between the census-based information about household socio-demographic characteristics and the information from the community verification exercise.This was to be expected as the community representatives were asked to validate the information collected in the census rather than to provide information independently of the census.The tendency to over-report chronically-ill, disabled, paternally orphaned, and double orphaned household members in the census, relative to the community verification, may have occurred because households were aware that the survey was linked to the cash transfer program and may have over-stated the frequency of these household members in the census in order to benefit from the program.This suggests that the community verification exercise was effective at reducing inclusion errors relative to the census-based method alone.However, it is also possible that households hide illness, disability, and orphanhood for fear of stigma and community representatives may not always be aware of the status of these household members.The asset-based wealth index and the PWR method showed moderate success at targeting vulnerable children: both methods reached children who were more likely to be suffering from poor educational and social outcomes.The asset-based wealth index method was more effective at targeting children with poor outcomes than the PWR procedure.However, both methods failed to target a large proportion of children with poor outcomes and the efficiency of the methods was generally low—few children with poor outcomes were reached per child targeted.The asset-based wealth index method was more efficient than the PWR procedure at reaching children with poor outcomes and the efficiency did not decline as a greater proportion of the poorest households were targeted.We found that census-based wealth indices were the most effective and efficient way of targeting children with poor outcomes.However, all five of the methods investigated were relatively inefficient and failed to reach a large proportion of vulnerable children.In terms of effectiveness and efficiency, 
combining the census-based wealth index method with the PWR procedure offered few improvements over the asset-based wealth index alone.It may be that household-based targeting using either quantitative or community-defined measures of wealth or poverty is not a particularly successful way of reaching children with poor outcomes.Alternative methods could involve directly targeting children with poor outcomes or perhaps reaching children through alternative channels e.g., in school or through community outreach work.Similarly, it may be that targeting the poorest children excludes other types of vulnerable children e.g., orphans or those caring for sick adults.Previous work using the Manicaland Cash Transfer Trial census data found that the asset-based wealth index was more effective and efficient at targeting children with poor outcomes compared with targeting based on household-level socio-demographic characteristics such as caring for orphaned, chronically ill, or disabled household members.The advantages of the PWR procedure were highlighted by our qualitative study: the method allows community involvement in the selection process and thereby increases community ownership and acceptance of poverty alleviation programs.The increased transparency and perceived fairness of the method can reduce the potential for conflict within communities, especially in a context where the majority of households are struggling to meet their basic needs.Previous studies have found that the involvement of communities in decisions about the distribution of local resources to improve child wellbeing can strengthen community responses to the needs of vulnerable children.PWR is also cheap and can be carried out relatively quickly, although our method for carrying out the census (asking members to convene at a central meeting point within the community) was much quicker than traditional methods where each household is visited by a research assistant.Thus the PWR method may be more cost-effective, despite being somewhat less effective and efficient than the census at identifying the poorest households.Comparing the cost-effectiveness of different types of targeting method is an important area for future work.In light of the positive community response to their involvement in the targeting process, and the program more generally, and the possible benefits in terms of reducing inclusion errors, it is clear that providing community members with opportunities to participate in poverty alleviation programs has the potential to open up possibilities that have implications for the success and sustainability of cash transfer programs.Despite the small number of in-depth interviews and focus group discussions that were conducted, the added value of these possibilities needs to be recognized and should be explored further in future programs adopting community-based targeting methods.This argument resonates with the growing interest and recognition of the community response to HIV.Nonetheless, it is also clear that both the community-based and census-based methods had serious difficulty reaching vulnerable children, with the asset-based wealth index offering some advantages in terms of effectiveness and efficiency over the PWR procedure.Given the frequency of the use of socio-demographic and wealth data, derived from household surveys and community committees, to target cash transfer programs (German Technology Cooperation, 2007; Robertson et al., 2013; Schubert & Huijbregts, 2006) and the increasing popularity of wealth
ranking procedures, our results are of some concern.Further work is required to improve methods for targeting social welfare interventions to the poorest households and the most vulnerable children.Children living in households targeted by the PWR procedure, the asset-based wealth index, and the combined targeting methods, both inclusive and exclusive, were significantly more likely to have poor school attendance and to lack a birth certificate than non-targeted children.These associations were stronger for the asset-based wealth index methods than for the PWR procedure.The strongest associations were found for the asset-based wealth index targeting the poorest 28% of households.When the two targeting methods were combined inclusively and exclusively, the strengths of the associations were midway between the strengths of the asset-based wealth index associations and the PWR procedure associations.In Table 5 and Figure 2, we compare the proportion of children with each poor outcome that are reached by each of the targeting methods with the proportion of children in the general population that are reached.Table 5, Section 2 shows that a large proportion of children with poor health, education, and social outcomes are missed by all five targeting methods.Figure 2 compares the efficiency of asset-based wealth index and the PWR method at targeting children with each poor outcome by showing the percentage of children with poor outcomes who are reached as the percentage of all children targeted are increased.For birth registration and primary and secondary school attendance, both methods perform slightly better than by chance in reaching children with poor outcomes.The asset-based wealth index performs slightly better than the PWR method for all three of these outcomes.Neither method performs better than chance with respect to reaching children with incomplete vaccination records.Table 5, Section 3 shows that the efficiency of all five targeting methods was poor—the number of children with poor outcomes reached per child targeted was low for each of the four child vulnerability indicators.All methods were most efficient at targeting children aged 0–4 years who lacked a birth certificate.The asset-based wealth index methods and the exclusive combination method were slightly more efficient than the PWR procedure and the inclusive combination method for all indicators.As the proportion of children reached by the asset-based wealth index increased from 18% to 28%, the efficiency of the method did not change significantly.
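As a concrete illustration of the measures used above, the short Python sketch below computes coverage (the share of children with a poor outcome who are reached), efficiency (the number of children with poor outcomes reached per child targeted) and overall reach from a child-level table. This is not the study's analysis code; the Child fields and the example numbers are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Child:
    targeted: bool      # reached by the targeting method being evaluated
    poor_outcome: bool  # e.g. not attending school or lacking a birth certificate

def targeting_metrics(children):
    n_targeted = sum(c.targeted for c in children)
    n_poor = sum(c.poor_outcome for c in children)
    n_poor_targeted = sum(c.targeted and c.poor_outcome for c in children)
    coverage = n_poor_targeted / n_poor if n_poor else float("nan")            # share of vulnerable children reached
    efficiency = n_poor_targeted / n_targeted if n_targeted else float("nan")  # vulnerable children reached per child targeted
    reach = n_targeted / len(children)                                         # share of all children targeted
    return {"coverage": coverage, "efficiency": efficiency, "reach": reach}

# Hypothetical example: a method that targets 30% of all children and reaches
# half of the children with a poor outcome.
children = ([Child(True, True)] * 15 + [Child(True, False)] * 15 +
            [Child(False, True)] * 15 + [Child(False, False)] * 55)
print(targeting_metrics(children))

Comparing the resulting coverage and efficiency against the overall reach gives the "better than chance" comparison made in Figure 2: a method performs better than chance for an outcome when the share of affected children it reaches exceeds the share of all children it targets.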
We used baseline data, collected in July-September 2009, from a randomized controlled trial of a cash transfer program for vulnerable children in eastern Zimbabwe to investigate the effectiveness, coverage, and efficiency of census- and community-based targeting methods for reaching vulnerable children. Focus group discussions and in-depth interviews with beneficiaries and other stakeholders were used to explore community perspectives on targeting. Community members reported that their participation improved ownership and reduced conflict and jealousy. However, all the methods failed to target a large proportion of vulnerable children and there was poor agreement between the community- and census-based methods. © 2013 The Authors.
437
Information content of household-stratified epidemics
Mathematical models have been identified as important tools in the description of the transmission of infections as well as the evaluation of control strategies.Early infection models frequently assumed that the population mixed homogeneously with frequency- or density-dependent transmission.The homogeneous-mixing assumption can be extended relatively straightforwardly to allow for host heterogeneities such as stratification by age.Further extensions involve dividing the population into activity-based risk groups or households.For a number of infections requiring close contacts, transmission within the household has been identified as an important component of spread due to the greater intimacy and the stable nature of the contacts compared to contacts outside the households.This has led to the development of household driven dynamic models for the exploration of targeted vaccination programmes.Following their development and more recent usage, these models require parameterization by fitting to household-stratified infection data, typically on final outcomes.Advances in laboratory techniques mean that more detailed, temporal, data have increasingly become available although these remain costly and time consuming to collect, motivating the question of whether the design of these studies can be optimised.In order to design a study, choices have to be made on overall protocol, the number of participants, duration, the number of time points to sample, the sensitivity and specificity of tests, and many other questions – all of which should be guided by both knowledge of the system to be measured and resource constraints.This paper addresses the question of designing studies to collect household epidemic data in order to maximize the information available to calibrate the parameters of a household stratified epidemic model given a fixed budget.Household stratified data collection usually involves enrolling households and prospectively following them up to collect samples for pathogen identification.In designing these studies, two main decisions need to be made, with the first being the number of households to enroll and the second being the frequency of data collection or the number of times to collect samples from individuals.Previous work done by Klick et al. evaluated study designs that make most cost-effective use of resources for accurately and robustly estimating the secondary attack proportion from a set of households in a transmission study and for maximising statistical power.These studies were carried out within the framework of classical optimal design of experiments and were not concerned with estimation of the parameters of a fully mechanistic, temporal, non-linear epidemic model, instead focusing on careful estimation of a static proportion of secondary infections.On the other hand, work by Cook et al. 
considered optimisation of the exact set of time points at which the SI epidemic model is observed, but restricted to one population rather than a population of households.Here, we provide for the first time a systematic method to optimise information content of household-stratified studies of infection over time at fixed cost, which involves the evaluation of an optimal trade-off between the sample size and the intensity of follow-up.Since the models involved do not have simple likelihood functions, we adopt a Bayesian experimental design framework which enables, amongst other things, the use of a computationally intensive Markov chain Monte Carlo methodology to deal with arbitrary likelihoods.Lindley presents a decision theoretic approach to experimental design, arguing that a good way to design experiments is to specify a utility function which should reflect the purpose of the experiment.Since the main goal of the current work involves making inference on model parameters, we have used a utility function based on Shannon information, a popular choice in Bayesian optimal experimental design that captures many of our intuitions about information and which we discuss in more depth in the Methods section below.Our design choice is, overall, regarded as a decision problem selecting the design that maximises the expected utility.Competing study designs will be evaluated under two protocols: longitudinal/cross-sectional and cohort.Under the cross-sectional model, the assumption is that the households are randomly selected at every time-point the samples need to be taken, while the cohort model assumes that the same households are followed and sampled throughout the study period.We note that the estimates of information content we provide cannot be used to compare these two protocols.In practice, however, we expect that considerations such as gaining informed consent, recruitment and retention of participants and other practical considerations will take precedence in determining the overall study protocol.This may in fact lead to a hybrid design where new households are chosen at each time-point from within a larger pre-specified grouping – our cross-sectional design emerges from such a hybrid in the limit of a large grouping, and the cohort in the limit of a small grouping – with an example of such an approach being the virological confirmation of selected www.flusurvey.org.uk participants.In the next sections, we describe the household model, the optimal design formulation including the utility function, the results and a general discussion.We consider the realistic scenario in which the number of households in the population is large, so the overall epidemic is well approximated by its deterministic limit.We also assume that the number of households as a whole is much larger than the number of experimentally sampled households, so that the observed state of the sampled households bears negligible impact on the epidemic dynamics in the rest of the population.Fig. 2 shows this in practice, with marginal posteriors on each of the epidemiological parameters becoming ‘narrower’ as observations are added, at comparable levels of information per observation.In the collection of household stratified epidemic data, we will consider two study protocols.In the first, households to be enrolled are randomly chosen at each time point and in the second, households are randomly chosen at the beginning of the study and prospectively followed for the duration of the study i.e. 
until the end of the epidemic.The parameters to be inferred are the within-household transmission, τ, community transmission, β, and rate of recovery from infection, γ, which are shown in Fig. 1A, B, C respectively.We will also make the simplifying assumption that the population is made up of households each with a fixed number of members, n; this means that we only need to keep track of the proportion of households with s susceptibles and i infectives, Ps,i, since the number of recovered individuals will simply be r = n − s − i.For both the cross-sectional and cohort designs, it is important to note that we assume that the initial condition of the system at time zero is known and that the final time point occurs after the epidemic has finished and is always recorded as part of the sampling scheme as shown in Fig. 1F.A visualisation of the structure of simulated cohort data is given as Supplementary Fig. S1.For all the simulations, ϵ, which is the proportion of households in the population at the beginning of the study with one infected and all the other household members susceptible, is taken to be 10⁻³; this quantity is not of biological interest and therefore we assume that it is known.This is consistent with a naive infection being introduced in the population or an infection whose immunity wanes over time and is chosen to be small enough not to deplete the susceptible population significantly, but to be large enough that we need not consider stochastic effects at the population level.To examine whether the results are robust to changes in the model parameters, we have, for the designs with 100 data points, systematically explored the values of within and between household transmission and the number of individuals in a household as shown in Table 1.Each of the letters in the first column refers to the corresponding subplot in Figs. 6 and 7.For each parameter set explored, we generated 20 replicates and plotted the resulting information per datapoint.To obtain samples from the model posterior distribution, we use Markov Chain Monte Carlo with Random-Walk Metropolis Hastings sampling, independent Gaussian proposal densities tuned by hand and a starting point at the ‘true’ parameters β*, τ*, γ*.Burn-in time was 10³ and samples were thinned by a factor of 10.Mixing was assessed via trace plots and the total number of samples visualised is 10³.It is worth noting that our approach is designed to be capable of adaptation to a more fully Bayesian approach or even use within a frequentist framework.For the former, rather than use uninformative priors as we have done, informative priors could be used and the information gain calculated.For the latter, MCMC should be viewed as a versatile method of likelihood exploration for a complex model.Better intuition about optimality is drawn from Fig. 4 where we have re-run the analysis with different simulated datasets and recorded the amount of information from each run.Fig.
4 shows the mean and the 95% CI of the information for all the replicates and for each of the designs.The number of replicates that we consider is between 10 and 100.Subplots A–C show the information for the designs with 10, 100 and 1000 data points respectively for the cross-sectional design while subplots D, E and F show the same for the cohort design.From this figure, we can observe that designs with more time points contain more information per observation in the cross-sectional design.However, for the cohort study, there exists an intermediate optimum in the study designs giving the most information.As can be seen in subplots E and F in Fig. 4, the optimal designs are and for the designs with 100 and 1000 data points respectively.As for the designs with 10 data points, there is no evidence to distinguish them as their CIs overlap except for the design which contains very little information and therefore it is impossible to distinguish the other four designs meaningfully.Given that the measure of information about the three parameters is presented in the Shannon information, it is difficult to say how much information we gain for each parameter i.e. which parameters are well estimated depending on the study design.Fig. S4 in the supplementary material shows the information per datapoint for each of the model parameters β, τ and γ.The left and right hand columns contain the simulations for the cross-sectional and the cohort designs respectively while the rows contain the information for the experiments yielding 10, 100 and 1000 data points respectively.In general, the variance in the amount of information for each of the parameters decreases as one increases the number of datapoints from 10 to 1000.Also, τ, which is the within household transmission, seems to be the parameter that is best estimated in almost all of the simulations.Comparing the two bottom subplots, we can also see that we gain more information about the three parameters as we increase the number of time points for the cohort study design compared to the cross sectional design.Fig.
5 shows the simulations for 100 data points with the Flu-like parameters.As in the previous section, the optimal design for the cross-sectional study is given by the design with the highest frequency of data collection i.e.However, the cohort study suggests that the best design is often the one that selects the highest number of households,.However, we see that for some simulated datasets the presence of an intermediate optimum at is restored, highlighting that for complex systems e.g. with multiple transmission levels, non-linear relationships between the parameters and output interact with the random nature of the simulated data to produce results that are not trivial.We then explored the effect of varying, separately and in combination, both the transmission parameters and the number of individuals in a household.Figs. 6 and 7 show the results for the cross-sectional and the cohort studies respectively.The cross-sectional study seems robust to small changes in the parameters values such that the most information per data point is always given by the design with the highest number of time points.However, the cohort study seems to be sensitive to similar changes in the parameter values.For example, subplot D in Fig. 7, which corresponds to the scenarios with the highest community transmission, shows that there exist an intermediate optimal design at while all the other scenarios indicate that designs with more households will in general have more information per data point.It is interesting to note that some replicates will have a different optima compared to other replicates within the same set of simulations e.g. Fig. 7B.In this work, we have presented a general modelling framework that can be used to make inference on household model parameters based on household-stratified epidemic data.The epidemiological model used is the well studied SIR model and this can be easily modified to reflect the natural progression of any other infection or disease of interest.The basic idea behind this work is that inference of model parameters can be optimised or improved by selecting different study designs which are used to collect the data.Our results show that, for the cross-sectional study, information increases with an increase in the frequency of sampling i.e. the number of time points at which samples are collected.This is expected as the only within-household information one can collect will be somehow due to the overall epidemic since different households are sampled at each time point.However, for the cohort model, there often exists an intermediate optimum for the designs with 100 and 1000 data points meaning that the best inference of parameters will be the result of a trade-off between the number of households and the frequency of sample collection.In making a study design decision, the experimenter will need to take other factors about the system being studied into consideration.For example, it might be easier to implement the cohort study as there are fewer households that will need to provide consent for participation compared to the cross-sectional design.Also, if the sampling interval is very short, i.e. 
intense sampling, there may be limitations as to the timeframe required to obtain consent from a household and enroll it for participation in a study.We remark that it is difficult, given the work presented, to distinguish which of the two protocols is superior to the other.This is because we assume that all households are the same and therefore any heterogeneities that may arise from different households with different characteristics are not captured.It is worth noting that the time points selected for all of the designs always include the final time point and that this is assumed to occur after the epidemic has finished.From the early statistical work of Longini et al., and also more theoretical studies, we know that information about both the probability of household and community transmission can be estimated from having the final-size distribution of the number of household cases alone.It is therefore expected that the mass of the posterior distributions of β and τ will always concentrate around the baseline values that generated the data for all the designs.Another practical matter worth discussing is the optimal timing of the sampling.For example is it better to rush at the beginning of the epidemic or is it worth waiting and is it even necessary to sample over the entire epidemic.The optimal timing would be dependent on a number of factors among them being the serial interval of the infection which determines on average when a secondary case will start shedding the virus.Also, the probability of virologic confirmation since infection has been shown to be highly dependent on the time since infection at which the sample is taken consequently influencing the temporal structure of the design.Since we have not explicitly included these two factors in the model, it would be difficult to determine what the optimal timing strategy would be.However, it is clear that a design with more home visits will be less biased than that with less visits and this should come at a cost of greater variance of the parameter estimates due to a reduced sample size for a less intense sampling scheme.Despite the existence of literature on the optimal design of experiments, these methods are not routinely used in the design of studies of infectious disease transmission.This may have been in part due to limitations in computational power.However, with more computational resources available to researchers, we anticipate that these methods will become more commonly applied in the design of field studies in epidemiology.However, certain key questions will still need to be addressed.For example, despite the speed of modern computers, and the fact that our methods would make efficient use of multi-core machines, we were still constrained somewhat by numerical efficiency and future research could fruitfully consider both calculation of the likelihood in a more efficient manner, as well as improvements to the MCMC scheme.This computational cost has in particular limited the extent of the sensitivity and uncertainty analysis performed.Also, the range of ways in which a study can be designed will need to be taken into considerations.While our simulation-based framework offers a natural way of doing this, it presents a potential challenge in determining an appropriate utility function.The choice of the utility function is usually based on the objective of the experiment.According to Chaloner and Verdinelli, when inference about parameters is the main goal of the study, then Shannon information would be the best 
measure.However, Shannon information can also be used for prediction and in mixed utility functions that describe multiple simultaneous goals, therefore making it quite robust to the objective of the study.It is, of course, possible that the results would change if a different measure was used but that would equally be a reflection of a different study objective.This work has also considered static designs where the experiment is fixed at the beginning of the study.An extension would be to consider the possibility of adaptive designs that change depending on the evolution of the system.This would be a useful feature but probably the most challenging to implement practically given that ethical approval needs to be sought each time the researcher proposes a change to the design.Despite the challenges above, the kinds of studies defined in this work are becoming more common and therefore this work contributes to the discussion of how they should be designed in order to get the most information without collecting unnecessary data that can often be expensive to obtain or cause unnecessary risk to participants as some of the specimen collection methods are highly invasive.The fully Bayesian adaptation of our methodology suggested above has utility in such a context as it offers a platform to incorporate what is already known from other experiments in the design process.The experimenter is encouraged to design a different utility function from the one adopted here in order to reflect their study objectives.
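To make the design-scoring step concrete, the following minimal Python sketch estimates the Shannon information gained about (β, τ, γ) for a single candidate design by comparing the entropy of prior draws with the entropy of posterior MCMC samples. It is not the authors' code: approximating the differential entropy with a Gaussian fit, and the synthetic draws standing in for real MCMC output, are simplifying assumptions made only for illustration.

import numpy as np

def gaussian_entropy(samples):
    # Differential entropy (in nats) of a multivariate Gaussian fitted to the
    # samples; samples has shape (n_samples, n_params), e.g. columns beta, tau, gamma.
    cov = np.cov(samples, rowvar=False)
    d = samples.shape[1]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

def information_per_datapoint(prior_samples, posterior_samples, n_observations):
    # Entropy reduction from prior to posterior, divided by the number of
    # observations the design collects.
    gain = gaussian_entropy(prior_samples) - gaussian_entropy(posterior_samples)
    return gain / n_observations

# Illustrative use with synthetic draws standing in for real MCMC output:
rng = np.random.default_rng(1)
prior = rng.uniform(0.0, 2.0, size=(5000, 3))              # vague prior over (beta, tau, gamma)
posterior = rng.normal([0.8, 0.5, 0.33], 0.05, (5000, 3))  # sharper posterior after observing one design
print(information_per_datapoint(prior, posterior, n_observations=100))

Under this scheme, each candidate design is scored by its information per data point (averaged over replicate simulated datasets), and the design with the largest score is preferred, mirroring the per-datapoint comparisons discussed above.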
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.
438
Cost effective natural photo-sensitizer from upcycled jackfruit rags for dye sensitized solar cells
Requirement of cost effective and high performance energy harvesting technologies to meet the future energy demand urges researchers to explore multifarious functional materials for solar cell applications .Dye sensitized solar cells have been realized as potential alternate for many bulk and thin film based third generation photovoltaics due to the usage of low cost materials and simple fabrication processes .While silicon and other thin film based solar cell fabrication demands high vacuum and high temperature processing in controlled environments, DSSCs are fabricated via non-vacuum deposition techniques showing competitive photo-conversion efficiencies .Various functional materials such as fluorine doped tin oxide, titanium dioxide, photo-absorbing dyes, hole transporting electrolytes and counter electrodes are still under research for further development of hybrid photovoltaic technology .Photo-sensitizing organic dyes are important and influential components in DSSCs to determine the overall photovoltaic performance.Ruthenium, porphyrin and phthalocyanine based organic dyes have shown promising η values in DSSCs .Photo-sensitizing dyes play a critical role in DSSC performance in terms of light absorption, exciton generation, and electron injection into electron acceptors which determine short circuit current density and thus η in DSSCs .While ruthenium and porphyrin based dyes show an exceptional photovoltaic performance in DSSCs, various other routes have recently been explored to extract natural dyes .Sathyajothi et al. has recently reported that extracts from beetroot and henna have shown promising photo-absorption in the visible spectrum and thus yielded DSSCs with 1.3% and 1.08% efficiency, respectively .Natural dyes are relatively low cost materials due to the simple straightforward processing to extract the dyes from sources such as flowers and fruits .Natural dyes obtained from flowers, such as rose, lily.. and fruits, such as Fructus lycii.. 
have shown potential merits of considering natural resources to develop cost effective energy harvesting technologies .Albei natural dye extracts have generally shown relatively lower DSSC performance compared to ruthenium and porphyrin based dyes, but, in contrast, recently coumarin based natural dyes have shown η of 7.6% .It shows the possibility of developing high performance DSSCs via modified natural dyes.It is important to note that the pigments contained in the natural dyes are the sources to determine the active photo-absorption spectral window.The present research scenario points out the possibility of developing low cost photo-sensitizers from natural resources while the performance of the resulting solar cells is relatively lower.In general, the energy level alignment between the TiO2 and lowest unoccupied molecular orbital of the dye determines the efficient electron injection while the alignment between the highest occupied molecular orbital of the dye and the redox potential of the electrolyte influences the regeneration of the dye molecules.These two factors primarily stipulate the electron and hole transport in DSSCs.Additionally, the energy gap between HOMO and LUMO of the natural extract is an essential parameter which determines the spectral energy range in which the dye absorbs photons and this directly controls JSC of the resulting DSSCs.Design and development of novel photo-sensitizers for DSSCs are expected to lead to a successful establishment of cost effective photovoltaics.The present work examines a natural dye that has been derived from jackfruit rags, a least used and discarded component from jackfruits, and its application as a major photo-sensitizer in DSSCs.It is an attempt to establish a simple, cost effective and high throughput natural dye development process via upcycling the jackfruit waste for DSSC applications.Commercially available jackfruits were obtained and the fruits were cut, openned to separate the waste rags as a source material for the dye preparation.The separated rags were powdered and were suspended in 80% acidified methanol as a solvent at a concentration of 10% w/v.The mixture was heated at 50 °C for 5 h and then 100% methanol in a volume ratio of 1:2 was added.A supernatant was collected from the solution by centrifuging at 1000 rpm for 5 min at 4 °C after removing the solid fraction.The original volume of the material was 10 ml and the volume of supernatant was 3 ml.The supernatant was further processed using a centrifugal concentrator at 1725 rpm for 16 h at 35 °C.The resulting extract had a thick viscous consistency, devoid of methanol as a stock dye solution.The stock dye solution was further diluted to obtain the required concentrations for the study of the effect on photo-absorption.This method yielded 1.5 g powder.Fig. 
1 shows schematic of the process flow followed in the upcycling process of jackfruit rags into dye and their use in preparation of photo-anodes for DSSCs.A stock dye solution of JDND was prepared as explained in the previous section and stock solutions with three different concentrations of JDND were prepared.A colloidal nanoparticle TiO2 layer was prepared using commercially available anatase TiO2 nanoparticles by the doctor blade method.The JDND dyes with the three different concentration values were used to sensitize the TiO2.Commercially available Iodolyte AN-50 was used as a hole transporting layer.A 50 nm thin Pt film was used as a counter electrode.The JDND coated TiO2 photo-anodes and the Pt coated counter electrodes were coupled and the electrolyte was injected between the two electrodes through a pre-made channel on a parafilm spacer used to couple the electrodes.Further, this study was performed using cobalt as an alternate electrolyte to check the compatibility of JDND with cobalt redox couple.No TiCl4 treatment and TiO2 blocking layer were used in this work.Morphology of the jackfruit rags and colloidal TiO2 nanoparticles containing samples were characterized in a scanning electron microscope using the JEOL-JSM-6490-LA.Energy dispersive X-ray analysis was performed with an accelerating voltage of 15 kV in the range of 0–10 keV.The jackfruit rags were prepared for SEM using 2% glutaraldehyde and subjected to dehydration by graded aqueous solutions of glycerol for 1 h.The rags were then cut into circumferential and longitudinal sections to obtain surface and cross-sectional views in SEM.Optical characteristics of the JDND and TiO2 were studied by Perkin Elmer Lambda-750 UV–visible spectrometer.The current density–voltage measurements of the DSSCs were performed under AM1.5 illumination level using a solar simulator and a digital source meter.Electrochemical impedance spectroscopic measurements were performed on the fabricated DSSCs in the Autolab electrochemical workstation under dark condition.Fig. 2 shows the photographic images of the stock dye solution extracted from the waste rags in jackfruit.The jackfruit rags are well known waste material and this study explores the possibility of upcycling the waste portion for the energy harvesting application by extracting the photo-sensitizer shown in Fig. 2.The pristine dye extracted from the rags was observed to be dark reddish-brown and this study selected three different concentration values for DSSC applications by diluting the stock solution.From the original stock solution shown in Fig. 2, 10 mg, 20 mg and 30 mg of the dye was separated and used to sensitize the colloidal TiO2 films for DSSCs and the three diluted concentrations of JDND are shown in Fig. 2.The images, and in Fig. 2 show the 10 mg, 20 mg and 30 mg, respectively.The three solutions with diluted concentrations were used to sensitize the colloidal TiO2 layers and the respective fabricated photo-anodes are shown in Fig. 2: i–iv.The digital images show that the JDND diffused into the TiO2 layer and the variation in concentration can also be asserted from the photographs shown.Fig. 3 shows the SEM images obtained from the surface of the jackfruit rags and from the cross-sectional views from the rags and obtained by breaking them across.The rags were observed like smooth fibrous stacks as shown in Fig. 3.The cross-sectional views of the as fresh collected rags and those cleaned with DI-water cleaned are shown in Fig. 
3 and, respectively.The cross-sectional SEM images elucidate that the rags look like hollow fibers and they look better after the DI water cleaning.Fig. 3 and show the hollow fibers as bundles in the rags and these are randomly arranged.It is noticed that each hollow fiber in the bundle of rags is well separated by thin walls and the arrangement is few micron in size.Some of the regions are observed to have damaged hollow fibrous bundles and that are due to the manual cutting to acquire the cross-sectional images.Fig. 3 shows the surface morphology of the TiO2 nanoparticle layer used as an electron acceptor in the DSSCs presented in this work.This layer appears highly porous and the nanoparticles are randomly distributed as can be viewed in the agglomerated microscopic clusters formed by the TiO2 nanoparticles.The agglomerated nanoparticles as clusters in the surface are well connected to each other through which electron transport is established in the resulting DSSCs.Thus, the SEM surface morphology images reveal the presented material to be highly suitable for DSSC as an electron acceptor.Fig. 4 shows UV–vis optical absorption spectra taken on the JDND and the colloidal TiO2 nanoparticles on which the dye molecules were coated.Optical absorbance data of the three samples with different JDND concentrations are shown in Fig. 4.It is expected that absorbance will increase as the JDND concentration increases.As can be seen, JDND3 exhibits a stronger absorbance in the whole wavelength range of 350 nm–1000 nm and it is only due to the increased concentration.The wavelength range in which the dye is actively absorbing photons is the same for all three samples and the change in the quantity of absorbance corresponds to the change in the JDND concentration.The dominant absorbance characteristics of the three samples in the spectral range up to 1000 nm are further confirmed by the corresponding transmittance behavior in the same spectral window as shown in Fig. 4.As they show decreased optical absorbance starting around 700 nm, the transmittance increases at 700 nm which is in good agreement with their corresponding absorbance characteristics shown in Fig. 4.The present study explores the photo-absorbance ability of JDND and the application in DSSCs.Thus, it is important to compare the optical properties of JDND with those of TiO2 as they both make the photo-anodes for resulting DSSCs.Fig. 4 shows a comparison of the optical absorbance characteristics of JDND with TiO2 nanoparticles used as an electron acceptor in which the JDND was coated as a photo-sensitizer.It is shown in Fig. 4 that the optical absorbance of TiO2 in the spectral window of 350 nm–1000 nm is negligible while the absorbance of JDND is dominant.It is a major requirement for an electron acceptor and photo-sensitizing dye to have the optical compatibility in a particular spectral window in which the dye should exhibit a dominant absorbance while the electron acceptor shows a negligible one.Thus, the dye can generate excitons and inject electrons into the conduction band of the electron acceptor and the later transports the photo-generated electrons to the electrode via a diffusive transport process.Further, absorption coefficient values of JDND and TiO2 were calculated and Tauc plot was made to extract the values of the optical bandgap and the results are shown in Fig. 4.TiO2 shows the 3.1 eV optical bandgap while the JDND exhibits 1.1 eV and these values are in line with the optical absorbance characteristic spectra shown in Fig. 
4 for JDND and TiO2.Fig. 5 shows the EDX analysis carried out on the JDND sample to examine the constituents with respect to their energy dependency.Fig. 5 shows the surface morphology in which the EDX scan was performed and the distribution of the major constituents carbon, oxygen, sodium, chlorine and potassium.Further, all the elements were confirmed with respect to their energies as shown in Fig. 5 along with the quantification to estimate their mass and atomic % as shown in the inset table.Fig. 6 shows the J–V characteristics of the DSSCs utilizing JDND as a photo-sensitizer on the TiO2 nanoparticle layer.Three different concentrations of the JDND dye were utilized in the DSSCs.In this study, the iodide electrolyte was used as a hole transport material in the DSSCs.The photovoltaic performance metrics of the three DSSCs measured under the AM1.5 illumination level are listed in Table 1.The DSSC utilizing JDND2 as a photo-sensitizer yielded values of JSC, VOC, FF and η in the order of 2.2 mA cm⁻², 805 mV, 60.4%, and 1.1%, respectively.In general, photonic absorption of concentrated material will dominate that of materials with lower concentrations as was shown in the UV–vis optical absorption studies presented already in Fig. 4.However, the J–V characteristics of the DSSC utilizing the concentration of 30 mg of JDND yielded a JSC value which is 47% less than that obtained with the DSSC utilizing 20 mg of JDND.This can be attributed to the interface between TiO2 and JDND.We believe that a higher JDND concentration contributes to generating more excitons but it forms agglomerates on the surface of the TiO2 nanoparticles.JDND1 in the DSSC yielded a value of 1.2 mA cm⁻² for JSC, which is lower than those obtained for the other two DSSCs with JDND2 and JDND3, meaning the relatively lower concentration of JDND in the DSSC resulted in low photo-absorption.While the concentration of photo-sensitizer increased from 10 mg to 20 mg in the photo-anode, JSC increased by 78%.However, the further increase in JDND concentration, from 20 mg to 30 mg, does not follow the same trend as observed in the J–V characteristics of the DSSCs.It is evident that there is an optimum JDND concentration to provide a uniform surface coverage on the TiO2 nanoparticle layer and that is correlated to an optimum JSC value and thus can lead to the maximum possible photovoltaic performance.As can be seen, all three DSSCs yielded good VOC values with decent values of FF.Here, JSC is considered the only factor which controlled the overall performance of the reported DSSCs.The upcycling process to extract the natural dye from the jackfruit rags is a highly optimized lab-scale experimental procedure but the extracted dye used in this study was not further purified by any procedures.The present work presents only the application of a waste material in energy harvesting technology without any further modification.In general, the synthesis of various dye molecules involves rigorous multi-step procedures with purification steps.As a result, the use of such purified dyes in DSSCs commonly can ensure high performance.The as-prepared pristine JDND reported in the present work involves no modifications in terms of purification or doping.Thus, the lower JSC values are a direct representation of the pristine JDND and might be the limitation of the upcycled jackfruit rags.Fig.
6 shows the Nyquist and Bode phase characteristics of the DSSCs utilizing JDND with different concentrations.The charge transfer and recombination resistive characteristics in the DSSCs can be realized from the Nyquist and Bode phase plots shown in Fig. 6.The size of the semi-circles obtained from the three DSSCs shows that the resistance to recombination increased, which significantly facilitates the charge transfer process at the TiO2/JDND/electrolyte interfaces.It is well known that interfacial kinetics at the electron acceptor/hole transport material is the dominant factor that determines the charge transfer and recombination processes in the DSSCs.The Bode phase plots shown in Fig. 6 look similar in the cases of the DSSCs utilizing JDND2 and JDND3.This is in good agreement with their performance shown in Fig. 6.Further, the compatibility of JDND with other electrolytes, for example cobalt, was examined in DSSCs utilizing cobalt as a hole transport layer.Fig. 7 shows the J–V characteristic of the DSSC with cobalt as a hole transport layer and JDND2 as a photo-sensitizer.As the results show, the cobalt electrolyte can also yield higher VOC but lower JSC values than those of DSSCs using iodide as a hole transport material.The DSSC with cobalt electrolyte yielded values of JSC, VOC, FF and η in the order of 0.4 mA cm⁻², 783 mV, 60.5% and 0.3%, respectively.Fig. 7 also shows the Nyquist and Bode phase characteristics of the JDND based DSSCs with cobalt electrolyte.The smaller semi-circle obtained from this DSSC compared to those of the DSSCs with iodide electrolyte and the maximum phase angle at a higher frequency assert that the charge transfer and the recombination resistances are affected, which directly demonstrates that the interfacial charge transport kinetics are better at the TiO2/JDND/iodide interface than at the TiO2/JDND/cobalt interface.The two hole transport materials examined in the present work are well known in excitonic photovoltaic technology.The JDND extracted from the rags of jackfruit waste shows decent JSC and higher VOC values with two important commercially available electrolytes, confirming optimum band alignment, and thus it can lead to large scale production for commercialization at lower cost.Table 2 summarizes a few important natural dye sources and their application in DSSCs with maximum reported η values.In general, all dyes extracted from natural resources exhibit low photovoltaic performance.However, photovoltaic research, as a green energy technology, prefers low cost natural materials to develop environmentally friendly functional materials for viable DSSC applications.Mangosteen pericarp and Shisonin have been reported to yield values of η slightly greater than 1% while other sources give lower η than the two above mentioned.The JDND reported in the present work is pristine without any further purification process.Thus, we believe the JSC and the overall performance of the DSSC can be further improved if chemical purification steps can be adopted.However, JDND, as a waste derived photo-sensitizer, has shown comparable performance as confirmed by the photo-absorption of the material and the performance of the resulting DSSCs.Various natural sources reported so far are usable materials in various forms including food and cosmetics.The present study demonstrates the possibility of upcycling the waste portion from jackfruits and its possible application in DSSCs as a photo-sensitizing candidate.As the waste portion from the jackfruit is considered as a source for the synthesis
of the photo-sensitizer reported in this work, it is expected to be cost effective, making the resulting photovoltaic technology viable and affordable.The well-known photo-sensitizers N719 and Z907 are commercially available in the price range of USD 300–USD 450 for a quantity of 500 mg.The performance obtained from the JDND in DSSCs is comparable with reports on other natural materials.The illuminated photovoltaic parameters, such as JSC, VOC and FF, obtained from the DSSCs utilizing JDND as a photo-sensitizer are much higher than those of the other dyes extracted from the natural resources reported.Further, the jackfruit grows in the humid and hot tropics without many issues.This is an additional advantage for the availability of the waste source material to prepare the photo-sensitizer.The locally available waste as a source material is expected to reduce the material production cost, which will eventually help energy harvesting at low cost.Thus, the simple upcycling process of jackfruit rags to obtain a photo-sensitizer for DSSCs can be considered as a potential material synthesis platform for cost effective photovoltaics.A simple high throughput process has been demonstrated to upcycle jackfruit rags to derive a natural photoactive dye for energy harvesting applications.The significant photo-absorption in the visible spectral range confirms that JDND can be considered as a cost effective photo-sensitizer as it is derived from jackfruit rags.The DSSCs employing the JDND showed promising photovoltaic performance, leading to the development of low cost photo-sensitizers for energy harvesting applications.
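For reference, the photovoltaic metrics quoted above (JSC, VOC, FF and η under AM1.5, 100 mW cm⁻²) can be extracted from a measured J–V curve as in the minimal Python sketch below. It is not taken from the paper; the synthetic diode-like curve is an illustrative assumption chosen only to roughly match the JDND2 cell.

import numpy as np

def jv_metrics(voltage_V, current_density_mA_cm2, p_in_mW_cm2=100.0):
    v = np.asarray(voltage_V, dtype=float)
    j = np.asarray(current_density_mA_cm2, dtype=float)
    jsc = float(np.interp(0.0, v, j))               # current density at V = 0
    voc = float(np.interp(0.0, j[::-1], v[::-1]))   # voltage at J = 0 (assumes J falls monotonically with V)
    p_max = float(np.max(v * j))                    # maximum power point, mW cm^-2
    ff = p_max / (jsc * voc)
    eta = 100.0 * p_max / p_in_mW_cm2               # photo-conversion efficiency in percent
    return {"Jsc_mA_cm2": jsc, "Voc_mV": 1000.0 * voc, "FF_percent": 100.0 * ff, "eta_percent": eta}

# Synthetic diode-like curve with roughly 2.2 mA cm^-2 and 0.8 V, purely illustrative:
v = np.linspace(0.0, 0.85, 200)
j = 2.2 * (1.0 - (np.exp(v / 0.12) - 1.0) / (np.exp(0.805 / 0.12) - 1.0))
print(jv_metrics(v, j))

With these definitions, η is simply the maximum of V×J divided by the incident power, which is how a reported efficiency of about 1% follows from the quoted JSC, VOC and FF.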
Photo-sensitizers, usually organic dye molecules, are considered to be one of the most expensive components in dye sensitized solar cells (DSSCs). The present work demonstrates a cost effective and high throughput upcycling process on jackfruit rags to extract a natural photo-active dye and its application as a photo-sensitizing candidate on titanium dioxide (TiO2) in DSSCs. The jackfruit derived natural dye (JDND) exhibits a dominant photo-absorption in a spectral range of 350 nm–800 nm with an optical bandgap of ∼1.1 eV estimated from UV–visible absorption spectroscopic studies. The JDND in DSSCs as a major photo-absorbing candidate exhibits a photo-conversion efficiency of ∼1.1% with short circuit current density and open circuit voltage of 2.2 mA⋅cm−2 and 805 mV, respectively. Further, the results show that the concentration of JDND plays an influential role on the photovoltaic performance of the DSSCs due to the significant change in photo-absorption, exciton generation and electron injection into TiO2. The simple, high throughput method used to obtain JDND and the resulting DSSC performance can be considered as potential merits establishing a cost effective excitonic photovoltaic technology.
439
Bioanode as a limiting factor to biocathode performance in microbial electrolysis cells
Bioelectrochemical systems (BECs) appear to be an interesting research area focused on converting waste to energy or value-added chemical compounds.Contributions to this field have increased many-fold over the last decade.BECs are devices that can perform oxidation and reduction by either producing or consuming current.These devices make use of biocatalysts such as living microorganisms as whole-cell catalysts and specific enzymes as non-viral organic catalysts.The systems are typically named according to their purpose and the biocatalysts used, for example, the microbial fuel cell and the microbial electrolysis cell (MEC), both named for their use of microorganisms as catalysts and their production of electrical current and biohydrogen, respectively.The ability of MECs to produce hydrogen and treat wastewaters simultaneously is potentially very useful.Earlier laboratory experiments on hydrogen-producing MECs were conducted by placing a cation exchange membrane or an anion exchange membrane to isolate the anode and cathode in two separate reaction chambers.As early cathodes mainly contained metal-based catalysts for hydrogen evolution, the purpose was to optimise the cathode conditions without affecting the microbial community at the anode while clean hydrogen could be obtained at the cathode.Even though the advantage of obtaining highly pure hydrogen was attractive, membrane separators caused serious drawbacks during operation.Because the membrane separates the anolyte and catholyte while allowing only selected ions to pass through, it can increase the accumulation of specific ions and cause a charge imbalance between the two chambers.Thereafter, single-chamber membraneless MECs were introduced to eliminate the charge barrier and internal resistance caused by membrane separators.Despite better performance in energy usage and a higher hydrogen production rate during the initial working stage, single-chamber membraneless MECs suffered from a performance drop after long-term operation.This is because hydrogen produced at the cathode may undergo diverse pathways and be converted into low-value products, which is detrimental to the overall MEC performance.The ability of the anode to re-oxidise hydrogen in the same electrolyte directly increases the electrical current and reduces efficiency through this hydrogen cycling phenomenon.In addition to this artificial phenomenon, proliferation of homoacetogenic and/or methanogenic microorganisms could reduce hydrogen production and accumulation in the system.Hydrogen is either converted into acetate and utilised by the biofilm on the anode or transformed to methane, reducing the purity of the offgas product.Despite the fact that extensive studies have been carried out to solve the mass transport limitations of MECs, from double-chamber designs using separators to membraneless MECs, none of these studies focused on the use of biocatalysts in both the anode and the cathode.Rozendal et al. began a comprehensive biocatalyst study of an MEC by deploying a three-step start-up procedure and a polarity reversal method to turn an electrochemically activated bioanode into a biocathode for hydrogen production.Years later, with the same setup, Jeremiasse et al. studied the first fully biological MEC by combining a bioanode and a biocathode, in which both the oxidation and reduction processes were performed by electrochemically active microorganisms.The same study was also performed by Liang et al.
to test the effect of bicarbonate and cathode potential on a biocathode started up with the same three-step procedure. That study focused on the hydrogen-producing biocathode and its performance over a range of applied potentials, and provided little information on the bioanode; it was assumed that the bioanode could supply the current required for the biocathode to generate hydrogen. More recently, simpler start-up procedures have been adopted for enriching autotrophic hydrogen-producing biofilms, making the use of a bioanode and a biocathode in the same system easier and more reliable. Once again, however, these were half-cell experiments focused only on the biocathode, and no information was reported on the anode. Other advantages of biocathode MECs have also been demonstrated in wastewater treatment, where electrons supplied from an external power source are used to remove inorganic substances such as sulphate, nitrate and heavy metals. However, those studies involved only inorganic reduction reactions and generated no hydrogen; although one of them reported how the bioanode behaved during polarisation tests, its biocathode was intended for sulphate reduction rather than hydrogen production. The minimum electrical potential required to drive the reaction in an acetate-fed MEC is about 0.13 V, although more energy must be supplied in practice to overcome overpotentials in the system. Thermodynamically, this is a much smaller voltage than the 1.21 V required to derive hydrogen from water electrolysis at neutral pH, which can rise to 1.8–2.0 V under alkaline conditions because of electrode overpotentials.
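The ≈0.13 V figure follows from the standard half-reaction potentials for acetate oxidation and proton reduction at pH 7; the values below are the ones commonly quoted in the MEC literature rather than measurements from this study:

$\mathrm{Anode:}\ \mathrm{CH_3COO^- + 4H_2O \rightarrow 2HCO_3^- + 9H^+ + 8e^-},\quad E_{an} \approx -0.28\ \mathrm{V\ vs.\ SHE}$

$\mathrm{Cathode:}\ \mathrm{2H^+ + 2e^- \rightarrow H_2},\quad E_{cat} \approx -0.41\ \mathrm{V\ vs.\ SHE}$

$E_{eq} = E_{cat} - E_{an} \approx -0.13\ \mathrm{V}$

so an applied voltage of at least about 0.13–0.14 V is needed in principle, compared with more than 1.2 V for water electrolysis at the same pH.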
A second consideration for better MEC performance is the robustness of the anode, because it can limit the current supplied to the cathode. A weak anode with a more positive open-circuit potential tends to perform poorly in supporting the cathodic reduction reaction when a fixed voltage is applied between the electrodes. With a weak anode, more current must be drawn from the external power supply to drive the reduction reaction at the cathode, resulting in higher energy consumption. This phenomenon has mainly been reported for conventional MECs with abiotic cathodes, and whether a bioanode coupled with a biocathode behaves in the same way remains an open question. For the MEC to be feasible, the anode needs to supply at least as much energy as is invested in the cathode. The first working MEC of this kind was reported to show that hydrogen production from biocatalysed electrodes is possible in principle; however, the system was not optimised and the hydrogen production rate was low even when higher potentials were applied, owing to high overpotentials in the system. Jeremiasse et al. reported an MEC that reached a maximum current density of 1.4 A/m2 at an applied voltage of 0.5 V, or 3.3 A/m2 at an optimum cathode potential of −0.7 V, with a biocathode. Their work mostly focused on the MEC system as a whole and on how the biocathode performed at different potentials applied from a power supply. Most studies have considered only the biocathode itself in half-cell experiments, with little information about the bioanode, so there is limited information on the function of the bioanode as the supporting electrode for a biocathode in MEC systems. Several questions remain unanswered: how does the bioanode respond when the applied potential on the biocathode is changed, what is the limiting potential a bioanode can withstand before it loses its ability to produce electrons, and does it perform equally well when the potential set at the anode is high? In this study, the main objective was to enrich a bioanode, test it at higher applied potentials (up to +1.0 V) and then in an MEC, in order to assess its robustness. The anode should be able to supply electrons to the cathode of the MEC and thereby reduce the total electrical energy required for hydrogen production. We believe that a sufficient electron supply from substrate oxidation by the bioanode is vital to support hydrogen evolution at a biocathode and thus to keep the energy demand from the external power supply as low as possible. To achieve an optimum hydrogen production rate from the biocathode, the anode plays an important supporting role in the biological MEC system: on the one hand it may lower the external energy supplied to the system and increase energy recovery in terms of hydrogen evolution; on the other hand it could become a limiting factor for the whole system, together with other problems such as substrate crossover and mineral precipitation on the electrodes. Because biocatalysts are used at both the anode and the cathode, a double-chamber, membrane-based MEC was used to give better environmental control in each chamber. Moreover, specially designed electrolytes that accommodate the different reactions and end products are vital for the growth and regeneration of the independent, microbially dominated communities in the two separated chambers. This information is useful for defining parameters for actual operating conditions and for assessing the effectiveness and feasibility of the system in practical applications. Double-chamber electrochemical cells of 25 mL volume were used. Each chamber was constructed from polyacrylate, with external dimensions of 7 × 7 × 2 cm and internal dimensions of 5 × 5 cm cross-section and 1 cm thickness in the direction of current flow for the fluid space. Two identical chambers were assembled together as shown in Fig.
S1. A cation exchange membrane was placed between the two chambers. Graphite felt with a geometric size of 5 × 5 × 0.5 cm was used for the electrodes. For bioanode enrichment, platinum-coated graphite felt with a platinum loading of 0.5 mg/cm2 was used as the cathode. A silver/silver chloride reference electrode was inserted into the anode chamber to monitor potentials. Anolyte flowed through the cell via two pipe connections on opposite sides of the chamber. The cathode chamber incorporated a hole for collecting the gas products: an 80 mL glass tube with a septum on top was fixed into the hole and filled with cathodic medium, and the gas produced was collected and measured by the water displacement method. Prior to start-up, both anode and cathode chambers were filled with deionised water and the electrodes were soaked overnight before use. The bioanode was first enriched coupled to a Pt-coated cathode. Once the reactor produced a stable current, the Pt-coated cathode was replaced with new plain graphite felt to start the enrichment of the biocathode. This strategy of establishing the bioanode first and then the biocathode was adopted in order to obtain bioelectrochemically active electrodes in both chambers of the microbial electrolysis cell. Inocula were obtained from the anode of a microbial fuel cell and from an anode control, both of which had been operated for over a year; these electrodes had been identified as being colonised predominantly by Geobacter sp. and Desulfovibrio sp., respectively. A four-channel potentiostat was used for both enrichment processes. A fixed potential of +0.2 V vs. the standard hydrogen electrode (SHE) was first applied to the anode during bioanode enrichment, before the fixed potential was switched to −0.7 V vs. SHE at the cathode for biocathode enrichment. At the initial stage of biocathode enrichment, the potential of +0.2 V vs. SHE was still applied to the bioanode to prevent it from losing its ability to produce a stable current. Once the Pt-coated cathodes had been replaced with plain graphite felt, the cathode chambers were injected with 25 mL of the inocula mixed in a 1:1 ratio, as described above. Hydrogen (99.99% grade) was fed into the cathode chamber once a day and recycled via the headspace of the glass tube to encourage the growth of hydrogen-oxidising microorganisms for at least a week before the fixed potential was switched from anodic to cathodic operation. A 40-channel data logger was used to record electrode and cell potentials. Both anode and cathode media were fed continuously through their respective chambers at flow rates of 10 mL/h using peristaltic pumps. The anode medium contained: NaCH3COO 0.41, NH4Cl 0.27, KCl 0.11, NaH2PO4·2H2O 0.66, Na2HPO4·2H2O 1.03, Wolfe's vitamin solution 10 mL/L and modified Wolfe's mineral solution 10 mL/L. The carbon source in the medium was 10 mM unless stated otherwise. The cathode solution contained NaH2PO4·2H2O 0.66 and Na2HPO4·2H2O 1.03 during bioanode enrichment, while the biocathode medium for biocathode enrichment was prepared as in a previous study. Control MFCs and MECs were set up alongside the bioanode and biocathode enrichments under the same conditions and with the same media, but without any added inoculum. After stable currents were obtained at an applied potential of +0.2 V, the bioanodes were subjected to a series of chronoamperometric tests at −0.3, −0.2, 0, +0.2, +0.4, +0.6, +0.8 and +1.0 V.
The corresponding analysis of the biocathodes covered −0.5, −0.7, −0.8, −0.9 and −1.0 V; the biocathodes were subjected to polarisation tests once a stable current had been observed at an applied potential of −0.9 V. Cyclic voltammetry was performed with a potentiostat equipped with an FRA32M module or with a Quad potentiostat. All potentials are reported with reference to the standard hydrogen electrode. The pH and conductivity of liquid samples were measured before the liquids were filtered through 0.2 μm syringe filters, and the samples were kept at 4 °C prior to analysis. The gas produced at the biocathode was captured in a glass tube by the water displacement method and the gas volume was recorded every 24 h. Gas samples were then withdrawn with a syringe through the septum on top of the glass tube and analysed by gas chromatography. Two columns, molecular sieve 5A and Chromosorb 101, were used and operated at 40 °C; the carrier gas was research-grade (99.99%) N2 at a pressure of 100 kPa, and a thermal conductivity detector was used to identify the gases from their retention times.
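As a cross-check on gas measurements of this kind, the volume of hydrogen expected from the charge passed at the cathode can be estimated from Faraday's law. The sketch below is a generic calculation (assuming ideal-gas behaviour and two electrons per H2), not the authors' exact procedure, and the numbers in the example are placeholders rather than data from this study.

```python
import numpy as np

F = 96485.0   # Faraday constant, C per mol of electrons
R = 8.314     # gas constant, J/(mol K)

def h2_volume_from_charge(charge_C, T_K=298.15, P_Pa=101325.0):
    """Ideal-gas volume (L) of H2 expected if all of the charge ends up as H2 (2 e- per H2)."""
    n_h2 = charge_C / (2.0 * F)               # mol of H2
    return n_h2 * R * T_K / P_Pa * 1000.0     # convert m3 to L

def cathodic_gas_recovery(v_h2_measured_L, charge_C):
    """Measured H2 volume divided by the volume expected from the charge passed."""
    return v_h2_measured_L / h2_volume_from_charge(charge_C)

# Illustrative use: integrate a logged current trace (A) over time (s), then compare
# with a measured daily gas volume.
t_s = np.arange(0, 24 * 3600, 60.0)           # one day, sampled every minute
i_A = np.full_like(t_s, 0.9e-3)               # a steady 0.9 mA, for illustration only
charge = np.trapz(i_A, t_s)                   # ~78 C of cathodic charge
print(h2_volume_from_charge(charge))          # ~0.0099 L of H2 expected
print(cathodic_gas_recovery(0.0085, charge))  # ~0.86, i.e. ~86% cathodic recovery
```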
Four bioelectrochemical cells, including two controls, were set up in MFC mode. The operating conditions of the controls were identical to those of the experimental cells, except that no inoculum was added. First, the anodes of these cells were inoculated, and a stable current was produced after a week of culturing at a fixed potential of +0.2 V. Next, the bioanodes were subjected to chronoamperometry for at least a day before cyclic voltammetry analysis. The current densities produced at the different applied potentials, computed from the chronoamperometric results, are shown in Fig. 1. Two current-density maxima, 0.361 ± 0.034 A/m2 at 0 V and 0.372 ± 0.063 A/m2 at +0.6 V, were observed over the range of applied potentials from −0.30 to +1.00 V. The first maximum, at 0 V, was attributed to electrogenic bacteria, since the inocula added to the bioelectrochemical cells had been shown to be dominated by Geobacter sp. It is postulated that a lower enrichment potential is the most suitable for the growth of dominant electrogenic species such as Geobacter sp., whereas the second, higher current at +0.60 V was suspected to arise from the induction of dominant electrogenic bacteria, non-electrogenic bacteria, or both on the anode surface. New redox couples were detected, suggesting that a different electron transfer mechanism might operate at this potential. Intensive work has been done to study the effect of the fixed potential used to enrich anode-respiring bacterial communities: bioanodes enriched at different potentials display different electrochemical behaviour and biofilm characteristics because the bacterial communities diverge. Applying a potential close to the bioanode midpoint potential tends to suppress non-electrogenic microbes on the anode whilst favouring electrogenic species, increasing the growth and proportion of electrogens such as Geobacter sp. in the bioanode community. Another way of obtaining a highly enriched community is to perform a secondary enrichment using the effluent culture of a primary bioanode. Table 1 summarises the enrichment potentials used in previous studies. Chronoamperometric analysis revealed that the enriched bioanode provided a broadly similar current density at anode potentials of 0 V and above, and the cyclic voltammograms indicated that a bioanode enriched at +0.2 V can survive at poised potentials up to +1.0 V. The bioanode enriched at +0.2 V produced two half-waves, with midpoint potentials at −0.20 and +0.20 V as shown in Fig. 1, which probably result from different electron transfer mechanisms. A more positive applied potential may also have resulted in a larger current output, especially above +0.4 V, and the new redox couples at this potential may indicate that a new electron transfer mechanism operates at more positive anode potentials. First-derivative analysis showed that the first midpoint potential, at −0.20 V, exhibited clear oxidation and reduction activity, whereas the second midpoint potential, at +0.20 V, showed weaker catalytic activity than the first and favoured oxidation over reduction.
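First-derivative analysis of this kind can be reproduced numerically; the sketch below (Python, with illustrative names) smooths the forward sweep of a voltammogram and locates midpoint potentials as local maxima of dI/dE.

```python
import numpy as np

def midpoint_potentials(E_V, I_A, window=5):
    """Locate midpoint potentials of catalytic waves as local maxima of dI/dE.

    E_V, I_A: potential (V) and current (A) of the forward (anodic) sweep only,
    ordered by increasing potential.
    """
    kernel = np.ones(window) / window
    I_smooth = np.convolve(I_A, kernel, mode="same")   # simple moving-average smoothing
    dIdE = np.gradient(I_smooth, E_V)                  # numerical first derivative
    interior = dIdE[1:-1]
    is_peak = (interior > dIdE[:-2]) & (interior > dIdE[2:])
    idx = np.where(is_peak)[0] + 1
    return E_V[idx], dIdE[idx]
```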
The −0.20 V midpoint potential is the one mainly reported in the literature, and it is attributed to the activity of electrogenic microbes such as Geobacter sp. and Shewanella sp. It may arise either from the multiple redox centres exposed on the surface of the microbial cells or from redox-active mediators secreted by specific microbes, both of which have potentials around −0.2 V. A dark biofilm was found on the surface of the bioanode enriched at +0.20 V. Such colour changes have been interpreted by other researchers as a change in the biofilm community on the anode; for example, a biofilm changed from orange-brown to a thinner, darker layer when the potential was increased from −0.15 V to +0.37 V. Based on this report, we suggest that a mixed community dominated by electrogens grew alongside non-electrogens at +0.20 V. Such a community can therefore survive at higher potentials and gives rise to the second catalytic activity at +0.20 V when the bioanode potential is fixed above +0.40 V. Nonetheless, a bioanode fixed at potentials above +0.40 V showed favourable oxidation activity only, with little reduction activity. Free flavins are normally secreted by electrogens to facilitate mediated electron transfer between outer-membrane cytochromes and the electrode. Once excreted, the flavins accept electrons from cytochromes on the outer membrane of the electrogens and transfer them to the electrode in reduced form. The reduced flavins are oxidised at the anode surface and were probably washed out of the continuously fed bioanode before they could be recycled back to the electrogens to transfer further electrons. Fig. 2 shows the maximum/minimum points of the catalytic waves of Fig. 1 plotted against the range of applied potentials. It reveals that the first electron transfer mechanism was still active, although with low activity, when the poised potential was set close to the −0.2 V midpoint potential, e.g. −0.3 V, and that the catalytic wave intensified when the poised potential was set more positive than −0.3 V. Consequently, more substrate could be converted and more electrons transferred to the electrode: an electrode poised at a more positive potential makes it easier for electrogenic bacteria to discharge spent electrons and conserve energy via direct or mediated electron transfer. The catalytic wave began to decrease once the poised potential was set more positive than 0 V. As observed from the first derivative in Fig. 1, a second catalytic wave appeared at the +0.2 V midpoint, indicating that the bioanode could use another pathway to transfer electrons to the anode. Electrogenic bacteria are able to divert their metabolic pathways to accommodate changing growth and survival conditions, especially when the poised potential is changed from its original value. In addition to such divergent pathways, a shift in the microbial community that favours particular microbes while suppressing the primary electrogens is also possible, as those species may adapt to the change in potential more readily than the primary species in the community. As a result, the second electron transfer mechanism began to appear when the poised potential was set more positive than +0.20 V. Fig. 2 shows the second peak/trough points at the +0.20 V midpoint; this catalytic activity was greatest when the potential was set above +0.60 V. There are two possible explanations for the second midpoint activity: either non-electrogens grew side-by-side with the electrogens to create a robust biofilm that can use a wide range of high anode potentials as electron acceptor, or the electrogenic microbes possess several electron transfer pathways that can be switched between when the surrounding environment changes, e.g. from +0.20 V to +0.60 V. Although the bioanode could survive at higher potentials, toxic compounds and mineral deposition on the anode surface could hinder electron transfer from the microbes to the anode. Moreover, the driving force for abiotic reactions, e.g. water electrolysis, becomes larger relative to the biotic reaction as the potential is set more positive. All the bioanodes enriched in the previous experiment were subsequently deployed in dual-chamber MECs to examine biocathode performance. Fig. 3 shows the cell and electrode potentials of the control cathodes and the biological MECs recorded during the chronoamperometric tests. Interestingly, the bioanode acting as a biocatalyst maintained its potential at around −0.30 ± 0.02 V when potentials of −0.50 to −0.80 V were applied to the cathode. Although the bioanode could maintain its potential with the cathode set as low as −0.8 V, it began to lose its performance when more current had to be drawn from the anode to support the cathode at working potentials of −0.9 V and more negative, whereas the control anode maintained its potential until −0.9 V was applied to the cathode. Cyclic voltammetry was performed on both the bioanode and the biocathode after each chronoamperometric test; Fig. 3 shows the voltammograms of the biocathode and of the bioanode, and the relationship of hydrogen production and current density to cathodic potential is also shown in Fig. 3.
Analysis of the biocathode voltammogram shows a first catalytic activity at −0.35 V, suspected to be non-hydrogen-producing activity, whilst a second catalytic activity appeared at −0.8 V and below. A small hydrogen oxidation peak at −0.6 V demonstrated the reversible catalytic activity of the biocathode, accelerated by the enzyme hydrogenase. Meanwhile, Fig. 3 shows that the bioanodes working as counter electrodes lost their ability to catalyse the oxidation reaction after the chronoamperometric tests. According to the hypothesis set out in the introduction, the electrons consumed at the cathode should be at least matched by the electrons produced at the anode through substrate oxidation, in order to balance and/or reduce the energy demand from the external power supply; in the end, however, the bioanode did not retain its biocatalytic activity. For instance, at a cathodic potential of −1.0 V the current density was 0.99 A/m2, whereas the maximum current density the bioanode could produce was 0.36 A/m2, so the bioanode would have needed to provide an extra 0.63 A/m2 to close this gap. Because the bioanodes could not produce enough current to support the biocathodes, the power supply forced the anode potential to rise sharply, inducing abiotic reactions such as water electrolysis or, in the presence of oxygen, peroxide formation. Growth of the bioanode was halted, and the biofilm was probably killed, by toxic products generated abiotically at this high potential. Moreover, oxygen may have been produced by water electrolysis at the more positive anode potential once the biofilm could no longer sustain its oxidation activity to supply more electrons; such oxygen contamination would subsequently trigger the formation of peroxides and other inorganic anions that are toxic to the bioanode. Abiotic reactions came to dominate at the anode because the power supply had to draw a high current from the anode to match the current consumed at the cathode. There was no considerable current flow or hydrogen production when the applied potential was set between −0.5 V and −0.7 V, as shown in Fig. 3. Although substantial current began to flow into the biocathode at −0.8 V, it did not yet yield hydrogen production at the biocathode unless more negative potentials were used. Cathodic overpotential is probably the main reason why potentials below −0.8 V were required: theoretically, the hydrogen evolution potential is −0.42 V, which means at least −0.38 V was lost as overpotential in this setup. This outcome is consistent with a previous study on the hydrogen-producing microorganism Desulfovibrio sp., in which a reducing potential of −0.9 V or lower was needed because electron transfer was insufficient above −0.8 V. In contrast, mediators have been used to reduce the overpotential between the cathode and the cell surface and to facilitate electron transfer: Villano et al. tested methyl viologen and showed that the mediator could reduce the overpotential by up to 0.3 V, bringing the potential close to −0.45 V, only slightly below the standard hydrogen reduction potential of −0.41 V. However, this solution appears unsuitable for practical applications, as the mediator would be required continuously.
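The theoretical hydrogen evolution potential and the overpotential quoted above follow from the Nernst equation for proton reduction at near-neutral pH (a standard calculation, not specific to this study):

$E_{\mathrm{H^+/H_2}} = E^{0} - \frac{2.303\,RT}{2F}\log\frac{p_{\mathrm{H_2}}}{[\mathrm{H^+}]^{2}} \approx -0.059\ \mathrm{V} \times \mathrm{pH} \approx -0.41\ \mathrm{V}\ \text{at pH 7,}\ p_{\mathrm{H_2}} = 1\ \text{atm}$

$\eta_{c} = E_{\mathrm{applied}} - E_{\mathrm{eq}} \approx -0.80 - (-0.42) = -0.38\ \mathrm{V}$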
Abiotic current flow became significant at applied potentials more negative than −0.90 V. However, the biocathode only consumed a significant amount of energy from −0.70 V and below, where a moderate current flow was first observed; the working potential of the biocathode in this system should therefore lie between −0.70 and −0.90 V. To protect the bioanode from losing its performance as a biocatalytic electrode, the maximum current that can be drawn from the bioanode was determined from Fig. 1 as 0.36 A/m2. If the biocathode is to be supported by this current alone, the most negative working potential that can be applied is about −0.84 V, determined from Fig. 3 on the assumption that the current produced at the anode equals the current supplied to the cathode. This information is important for defining the optimum conditions under which the system promotes biohydrogen production rather than water electrolysis. A significant amount of hydrogen was produced only at potentials more negative than −0.80 V, even though a substantial reductive current was observed before this potential. It appears that a minimum energy input is required to overcome the activation energy, which manifests as overpotential, and to activate the microorganisms' hydrogenases to produce hydrogen. In a few studies, progressively more negative potentials were applied in chronoamperometric steps until significant hydrogen production was detected. Higher (more negative) potentials are required partly to compensate for hydrogen lost by diffusion and for overpotentials such as those caused by a higher electrolyte pH. Another strategy to promote hydrogen production is to keep the hydrogen partial pressure as low as possible by continuously removing hydrogen from the system and to maintain the electrolyte pH at around 7.0; the electrolyte pH is normally maintained between 6.5 and 7.5. If the pH is below 6.0, i.e. under acidic conditions, less energy is consumed and a higher applied potential can be used, because a higher concentration of protons is available in the bulk solution. The continuous-removal strategy does increase the hydrogen yield, but it can also increase the investment and operating costs because of the more complex system configuration and control devices required. Furthermore, the fraction of hydrogen lost through the membrane depends on the operating temperature: higher temperatures tend to increase the diffusion coefficient, as reported by Rozendal et al. The requirements also depend on the purpose of the MEC, whether it is to produce hydrogen or to remove inorganic contaminants. For instance, sulphate reduction requires a much less demanding standard potential than proton reduction, so if the MEC system is used to remove sulphate contamination rather than to produce hydrogen, a slightly higher potential can be applied. Table 2 presents an overview of the use of biocathodes for hydrogen-producing and non-hydrogen-producing purposes. Once the bioanodes had been enriched and gave a stable current output, they were tested at different substrate concentrations to observe the effect of concentration on current density and Coulombic efficiency (CE). Fig. 4 shows the current density and CE plotted against acetate concentrations up to 20 mM. A modified Monod equation was fitted to determine the coefficients Imax and Ks, which were 0.5138 A/m2 and 1.5163 mM, respectively. In this study an acetate concentration of 10 mM was used because it is the most practical concentration, sustaining about 86.8% of Imax and a Coulombic efficiency of 45%. Although a higher acetate concentration could raise the current density, the CE dropped sharply to 15% at 20 mM acetate, while lower acetate concentrations generated less current, which could jeopardise the energy recovery of the whole MEC system because not enough electrons would be supplied to the cathode for hydrogen evolution.
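A fit of this kind can be reproduced with a short script such as the sketch below. The functional form is the standard Monod expression i = i_max·S/(K_s + S); the acetate/current arrays are illustrative placeholders rather than the data behind Fig. 4, and the Coulombic efficiency is computed assuming 8 electrons per mole of acetate.

```python
import numpy as np
from scipy.optimize import curve_fit

F = 96485.0  # C per mol of electrons

def monod(S, i_max, K_s):
    """Monod-type dependence of steady-state current density on substrate concentration."""
    return i_max * S / (K_s + S)

# Illustrative acetate concentrations (mM) and current densities (A/m2), placeholders only.
S_mM  = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
i_obs = np.array([0.20, 0.31, 0.40, 0.45, 0.48])

(i_max, K_s), _ = curve_fit(monod, S_mM, i_obs, p0=(0.5, 1.5))
print(i_max, K_s, monod(10.0, i_max, K_s) / i_max)  # fraction of i_max sustained at 10 mM

def coulombic_efficiency(charge_C, delta_acetate_mol):
    """Electrons harvested as current relative to those available from acetate (8 e- per mol)."""
    return charge_C / (8.0 * F * delta_acetate_mol)
```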
Fig. 5 summarises the overall energy recovery in terms of electrical power, substrate oxidation and hydrogen produced. From these graphs, the external power supply appears to play a more important role in driving hydrogen production at the cathode than the electron-producing anode does. For instance, at a cathodic potential of −1.0 V, ηs for the biocathode was very high, at about 1317%, meaning that the larger part of the hydrogen recovered was not contributed by substrate oxidation at the bioanode. The opposite holds for ηe of the biocathode, which was 103%; the excess 3% was not provided by the electrical energy. Energy recovery at the biocathode was first observed at a cathodic potential of −0.8 V, whereas that of the control remained zero. A remarkable overall recovery of nearly 100% was recorded at a cathodic potential of −1.0 V.
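A minimal sketch of how recoveries of this kind are typically computed, assuming the conventional MEC definitions (energy content of the recovered hydrogen relative to the electrical input, the substrate input, or both); the heats of combustion are standard literature values and the function name is illustrative, not the authors' own code.

```python
DH_H2 = 285.8e3   # J/mol, heat of combustion of H2 (upper heating value)
DH_AC = 870.3e3   # J/mol, heat of combustion of acetate (literature value)

def energy_recoveries(n_h2_mol, charge_C, applied_voltage_V, delta_acetate_mol):
    """Energy in the recovered H2 relative to the electrical and substrate energy inputs."""
    w_h2 = n_h2_mol * DH_H2                 # energy content of the hydrogen produced
    w_e  = charge_C * applied_voltage_V     # electrical energy added by the power supply
    w_s  = delta_acetate_mol * DH_AC        # chemical energy of the acetate consumed
    return {
        "eta_E": w_h2 / w_e,                # can exceed 100% because substrate also contributes
        "eta_S": w_h2 / w_s,                # can exceed 100% because electricity also contributes
        "eta_overall": w_h2 / (w_e + w_s),  # bounded by 100%
    }
```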
This study demonstrated that the performance of the bioanode can limit the biocathode in an MEC system. The bioanode enriched at +0.2 V vs. SHE can survive at applied potentials up to +1.0 V and exhibited two significant catalytic activities, with midpoint potentials at −0.2 V and +0.2 V. The catalytic waves can shift from one to the other depending on the potential fixed at the anode, possibly because of a shift in the community or changes in the metabolic pathways of the dominant microbes. Meanwhile, the biocathode could produce hydrogen at applied potentials more negative than −0.8 V, for example −0.9 V. However, an applied potential of −0.9 V at the biocathode killed the bioanode, which was not able to generate enough current to meet the demand of the biocathode. In the operation of a biocathode, the potential versus current density behaviour required for effective hydrogen evolution may not be compatible with effective operation of the bioanode: the resulting current density may drive the anode to potentials that are less than ideal for the anode biofilm at a given cathode potential. An applied potential of −0.84 V was determined as the most negative value that can be applied to the biocathode without overloading the bioanode. The capability and robustness of the bioanode are therefore important in alleviating this limitation on the biocathode and on the whole system.
The bioanode is important for a microbial electrolysis cell (MEC), and its robustness in maintaining catalytic activity affects the performance of the whole system. Bioanodes enriched at a potential of +0.2 V (vs. the standard hydrogen electrode) were able to sustain their oxidation activity when the anode potential was varied from −0.3 V up to +1.0 V. Chronoamperometric tests revealed that the bioanode produced peak current densities of 0.36 A/m2 and 0.37 A/m2 at applied potentials of 0 and +0.6 V, respectively. Meanwhile, hydrogen production at the biocathode was proportional to the applied potential over the range from −0.5 to −1.0 V, with the highest production rate of 7.4 L H2/(m2 cathode area)/day at a cathode potential of −1.0 V. A limited current output at the bioanode could halt the capability of the biocathode to generate hydrogen. Therefore the maximum potential that can be applied to the biocathode without overloading the bioanode was calculated as −0.84 V.
440
Clinicians' awareness of the Affordable Care Act mandate to provide comprehensive tobacco cessation treatment for pregnant women covered by Medicaid
Tobacco use during pregnancy is the most common cause of preventable poor infant outcomes for which effective interventions exist.In addition, prenatal smoking is associated with an estimated $122 million in excess infant health care costs at delivery in the United States.Significant disparities exist between low and high socioeconomic status women, particularly among women enrolled in Medicaid.Smoking prevalence during and after pregnancy was 17.6% and 23.4%, respectively, among Medicaid enrolled women versus 5.2% and 9.3% among privately insured women.Considering that Medicaid is the largest payer of prenatal and delivery healthcare and covers 45% of US births, the potential cost-savings of eliminating tobacco use and averting poor birth outcomes in the pregnant Medicaid population could be substantial.The Affordable Care Act requires states to provide tobacco-cessation services, including counseling and pharmacotherapy, without cost-sharing for pregnant traditional Medicaid beneficiaries effective October 2010.However, it is unknown the extent to which obstetricians–gynecologists are aware of the Medicaid tobacco-cessation benefit.We examined awareness of the Medicaid tobacco-cessation benefit in a national sample of obstetricians–gynecologists and assessed whether reimbursement would influence their cessation practice.These findings can be useful to inform state maternal and child health and tobacco control efforts to reduce prenatal smoking.During February–August 2012, the American College of Obstetricians and Gynecologists conducted a mailed survey of a national stratified-random sample of practicing obstetricians–gynecologists.Detailed methodology has been described previously.Briefly, 425 Collaborative Ambulatory Research Network and 599 non-CARN members were invited to participate.CARN members are clinicians who volunteer to participate in ACOG surveys.Those invited received an introductory letter and up to 3 reminders.Response rates were 52% and 31%.The sample was further restricted to clinicians providing obstetrical care.The study was deemed exempt from review by ACOG and the Centers for Disease Control and Prevention Institutional Review Boards.The survey focused on practice patterns and opinions related to patient tobacco use.The two survey questions analyzed for this study were: 1) “Are you aware that the ACA includes a provision that requires that pregnant women on Medicaid receive coverage for comprehensive smoking cessation services, including both counseling and pharmacotherapy?,; 2) “How much influence would reimbursement for cessation services for pregnant women on Medicaid under the ACA have on how you provide cessation services?,Results were stratified by the categorical response of the percentage of pregnant Medicaid patients seen by respondents.Chi-squared tests were used to assess significant associations.Analyses were conducted in 2014.The majority of respondents were female and non-Hispanic White; on average, respondents completed residency 19 years ago.Most respondents practiced in urban/suburban locations, and 30.6% provided comprehensive primary care for women.About a quarter of respondents had > 50% pregnant Medicaid patients; 61.6% had < 50% pregnant Medicaid patients; and 13.2% had no pregnant Medicaid patients.Overall, 83% of obstetricians–gynecologists were unaware of the Medicaid tobacco-cessation benefit for pregnant patients.Lack of awareness increased as the percentage of pregnant Medicaid patients in their practices decreased.Of respondents who saw 
pregnant Medicaid patients, one-third said reimbursement would increase their cessation services, and nearly 40% of those with > 50% Medicaid patients said they would increase their services."A substantial fraction of respondents reported that cessation services would not change because reimbursement wouldn't address ‘existing barriers to delivering service’, and 16.2% said they did not know how reimbursement would affect their cessation practices.We found that 4 out of 5 obstetricians–gynecologists surveyed in 2012 were unaware of the ACA provision that required states to provide tobacco cessation coverage for pregnant traditional Medicaid beneficiaries.However, one-third of respondents reported that reimbursement would influence them to increase cessation services, and an even greater percentage was seen among respondents who saw more Medicaid-enrolled patients.A previous study suggests that states with more comprehensive Medicaid coverage of tobacco cessation treatments, primarily through coverage of medications, resulted in 1.6 percentage point reduction in smoking before pregnancy and a small increase in infant gestation.In addition, as counseling will also be covered by the ACA mandate, a meta-analysis of 77 trials found that psychosocial interventions are effective in increasing the proportion of women who stop smoking in late pregnancy, and women who received psychosocial interventions had an 18% reduction in preterm births and infants born low birth weight.Hence, reducing barriers to cessation treatments, as such through a comprehensive tobacco cessation benefit, could potentially allow more smokers to access treatment, increase cessation and improve infant outcomes among pregnant Medicaid enrollees.Comprehensive and well-publicized benefits have shown larger effects in quitting among the general population of Medicaid-enrollees in the state of Massachusetts, including among young people and women.For providers, this promotion included the development of fact sheets, with rate and billing codes, a pharmacotherapy pocket guide, and new intake and assessment protocols that were widely disseminated to health care systems and facilities.In addition, the state also directed educational campaigns to consumers, tracked the use of the benefits, and provided feedback and recognition to providers who were regularly referring patients.Acknowledging the importance of raising awareness of the tobacco cessation benefit, the Centers for Medicare and Medicaid Services placed information on their website about Medicaid tobacco-cessation benefits for pregnant women and non-pregnant enrollees.However, broad state promotion and outreach of the Medicaid tobacco-cessation benefits, as noted earlier, for pregnant women can help to increase treatment utilization.A substantial percentage of respondents reported that reimbursement would be insufficient to address existing barriers for cessation."Provider barriers that have been reported in a previous analysis of this data included time limitations to deliver cessation services in prenatal care visits and patient's resistance to intervention.While reimbursement may improve service provision and broad promotion of the benefits may increase awareness, additional strategies, such as provider training and healthcare system changes to facilitate stream-lined screening and treatment, are also important to increase treatment utilization.Educational campaigns directed to consumers could also stress the importance and/or benefits of quitting smoking and support that 
prenatal care staff can provide.This study has limitations to note.First, the study is limited by the low survey response rates, which is consistent with previous ACOG surveys.However, nonresponse bias has been shown to be minimal among physician groups compared to other groups.Second, the sample size is small.For our analysis of how reimbursement would influence cessation services, we had limited power to test for differences in whether reimbursement would influence cessation services by percentage of Medicaid patients seen.Finally, these data are self-reported, and we did not verify information regarding awareness with their actual cessation or billing practices.In conclusion, four out of five obstetricians–gynecologists surveyed in 2012 were unaware of the ACA provision that required states to provide tobacco cessation coverage for pregnant traditional Medicaid beneficiaries, and a third of respondents serving pregnant Medicaid patients reported that reimbursement would influence them to increase their cessation services.Promoting awareness of the Medicaid tobacco-cessation benefit among all medical providers who see pregnant and reproductive-aged women could help to reduce treatment barriers, thereby increasing cessation and improving maternal and infant health.The authors report no conflict of interest.
The Affordable Care Act (ACA) requires states to provide tobacco-cessation services without cost-sharing for pregnant traditional Medicaid-beneficiaries effective October 2010. It is unknown the extent to which obstetricians-gynecologists are aware of the Medicaid tobacco-cessation benefit. We sought to examine the awareness of the Medicaid tobacco-cessation benefit in a national sample of obstetricians-gynecologists and assessed whether reimbursement would influence their tobacco cessation practice. In 2012, a survey was administered to a national stratified-random sample of obstetricians-gynecologists (n = 252) regarding awareness of the Medicaid tobacco-cessation benefit. Results were stratified by the percentage of pregnant Medicaid patients. Chi-squared tests (p <. 0.05) were used to assess significant associations. Analyses were conducted in 2014. Eighty-three percent of respondents were unaware of the benefit. Lack of awareness increased as the percentage of pregnant Medicaid patients in their practices decreased (range = 71.9%-96.8%; P= 0.02). One-third (36.1%) of respondents serving pregnant Medicaid patients reported that reimbursement would influence them to increase their cessation services. Four out of five obstetricians-gynecologists surveyed in 2012 were unaware of the ACA provision that required states to provide tobacco cessation coverage for pregnant traditional Medicaid beneficiaries as of October 2010. Broad promotion of the Medicaid tobacco-cessation benefit could reduce treatment barriers.
441
Value of the 8-oxodG/dG ratio in chronic liver inflammation of patients with hepatocellular carcinoma
The mechanisms of hepatocarcinogenesis are incompletely understood .Mounting evidence indicates that oxidative DNA damage caused by reactive oxygen species and reactive nitrogen species accumulation in chronic liver inflammation may play a role in hepatocarcinogenesis .A number of investigators have suggested that hepatocellular carcinoma develops from the malignant transformation of hepatocytes, as these cells acquire multiple ROS- and RNS-induced mutations in key genes that control cell proliferation and death .This assumption has been supported by several observations.The prevalence of chromosomal gene alterations increases with the progression from chronic hepatitis to fibrosis, cirrhosis, low grade dysplasia, high grade dysplasia, early HCC, moderately differentiated HCC and finally advanced HCC .The epidemiological data demonstrate that there is a strong relationship between chronic liver inflammation and hepatocarcinogenesis .Approximately 80% of HCC cases are associated with chronic liver inflammation and liver cirrhosis .Chronic liver inflammation may produce ROS and RNS .ROS and RNS may cause DNA oxidation, nitrosylation, nitration, and halogenation, leading to mutations in key genes, including oncogenes and tumoural suppressor genes .These mutations likely confer growth advantages on these cells, leading to the transformation of normal hepatocytes and their evolution towards HCC.As mentioned in previous articles, 8-oxodG is one of the main oxidation products of guanosine and is induced by ROS and RNS .The presence of 8-oxodG in DNA leads to misreading and misinsertion of nucleotides during DNA synthesis, leading to G→T and G→C conversions .8-oxodG can be produced from continuous oxidative stresses associated with chronic inflammation .A previous study revealed that the 8-oxodG levels are elevated in some human pre-neoplastic lesions and cancerous tissues .Moreover, 8-oxodG has also extensively been used as an indicator of oxidative DNA damage and various diseases .The picture is less clear for hepatocarcinogenesis.Furthermore, it has not been completely confirmed whether the genetic alterations induced by oxidative DNA damage are involved in the malignant transformation of normal hepatocytes .Currently, one issue with the data is the concern about whether the artefactual production of 8-oxodG during DNA processing leads to an over- or underestimation of the role of 8-oxodG in malignant transformation .In the past, the accurate measurement of 8-oxodG in samples of human liver tissue has been hampered by limitations in the amount of tissue available for study, the incomplete release of nucleosides, the artefactual formation of 8-oxodG during tissue processing, and the limits of detection of the assays employed to measure the 8-oxodG levels .The European Standards Committee on Oxidative DNA Damage and other investigators have attempted to develop reliable protocols for sample preparation and analysis, with minimal dG oxidation as a consequence of sample preparation .However, both overestimation and underestimation of 8-oxodG concentrations have been reported by ESCODD using various methods .The conclusion reached from reviewing the previous studies, including those from ESCODD, is that the current techniques are not suitable to analyze the 8-oxodG levels in non-malignant liver tissues and tumors of HCC patients unless they are modified."Therefore, in this study, the protocols for extracting and hydrolyzing patients' DNAs were optimized, and the 8-oxodG levels were measured.This 
study evaluated the dose-dependent relationships between the amount of 8-oxodG and clinical variables in 54 patients with HCC, with particular attention paid to optimizing the experimental conditions to minimize the formation of 8-oxodG during the process.The aim of this study was to examine the role of oxidative DNA damage in the evolution of HCC by particularly focusing on oxidative DNA damage in chronic liver inflammation.Frozen liver tissues from 54 HCC patients were studied in this experiment.These liver tissues were collected by medical doctors in hospitals in Australia and South Africa.Twenty-three patients from The Princess Alexandra Hospital and The Royal Brisbane Hospitals, Brisbane, Australia and 31 patients from South Africa were enrolled in this study.The Australian cases were drawn from a tissue bank, while the clinical material from South Africa was obtained from Professor Michael Kew.The details of each case are presented in Table 1.Both non-malignant liver tissues and HCC tissues were available for some cases, while only non-malignant liver tissues or HCC tissues were available from other cases.In addition, the clinical data were not complete for all cases; therefore, the data on the presence or absence of cirrhosis or on the risk factors for chronic liver disease were not available for some cases.These studies were approved by the Human Research Ethics Committee of the Royal Brisbane Hospital and the University of Queensland.Zinc chloride, magnesium chloride, calcium chloride, sodium chloride, sodium acetate, guanidine thiocyanate, 2,2,6,6-tetramethylpiperidine-Noxyl, chloroform, 2-propanol, isoamyl alcohol, nuclease P1, proteinase K, RNase A, alkaline phosphatase, catalase and Tween 20 were purchased from Sigma.The Phase Lock Gel tubes were purchased from Eppendorf-Netheler-Hinz.Tris base was purchased from Amresco.High Performance Liquid Chromatography – tandem mass spectrometry was performed with a PE/Sciex API 300 mass spectrometer equipped with a turbo-ion spray interface coupled to a Perkin Elmer series 200 HPLC system from Queensland Health Scientific Services.Homogenization buffer was prepared with 20 mM Tris, 5 mM magnesium chloride, 50 U/ml catalase and 1 mM TEMPO and then adjusted to pH 7.5.Tween 20 was dissolved in homogenization buffer to a final concentration of 0.5% Tween 20.Guanidine thiocyanate was dissolved in Milli-Q water to produce a 4 M GTC DNA extraction solution containing 4 M GTC, 50 U/ml catalase and 1 mM TEMPO.One volume of isoamyl alcohol was mixed with 24 volumes of chloroform to produce the Sevag solution, as previously described.Antioxidants were added to all solutions, except 2-propanol and 70% v/v ethanol.Milli-Q water was used throughout.All solutions, except the Sevag solution, were stored in the dark at 4 °C in plastic bottles to avoid metal contamination from glass.The RNase A buffer consisted of 100 µg/ml RNase A, 2 mM calcium chloride, and 20 mM Tris, pH 7.5.Proteinase K was added to 2 mM calcium chloride and 20 mM Tris buffer to prepare a 20 mg/ml proteinase K solution, which was stored at −20 °C.The hydrolysis buffer contained 25 mM sodium acetate and 0.1 mM zinc chloride.The pH value of the hydrolysis buffer was adjusted to 5.3, and it was then stored in a cold room.Nuclease P1 was dissolved in hydrolysis buffer containing 50 mM sodium acetate and 0.2 mM zinc chloride to produce a 2.5 µg/µl solution.This solution was divided into small aliquots and stored at −20 °C.The alkaline phosphatase solution and catalase were stored at 4 
°C.Previous studies have shown that proteinase K digestion at 37 °C generates more artefactual 8-oxodG than the cold 4 M GTC method at 0 °C.In this study, DNA from human liver tissues was isolated with these two methods to compare their effects on the efficiency of DNA extraction and the generation of 8-oxodG DNA so that the method that generated the least amount of artefactual 8-oxodG could be used for the subsequent isolation of DNA from human liver tissues.Four frozen samples of human HCC tissues and four samples of normal human liver tissues were studied.The samples were distributed according to the method of extraction.Each group consisted of two HCC samples and two normal liver tissue samples.In a cold room, 50 mg of each tissue was homogenized with a high speed homogenizer.The homogenate was centrifuged at 1000g for 5 min.The supernatant was discarded, and the nuclear pellets were washed twice with Tween 20 buffer, followed by centrifugation at 1000g for 5 min after each wash.The nuclear pellets of Group One were used to isolate DNA by proteinase K digestion, whereas the Group Two samples were stored in the cold room to isolate DNA using the cold 4 M GTC method.The nuclear pellets of each sample in Group one were dissolved in 540 µl of RNase A buffer and incubated in a 37 °C water bath for 30 min.Subsequently, 14 µl of proteinase K was added and incubated at 37 °C for 45 min.The solution was transferred to a prespun 2.0-ml PLG tube and 560 µl of Sevag solution was added.The tubes were centrifuged at 13,000g for 5 min.This led to the formation of a mixed organic/aqueous solution in which the proteins and lipids precipitated in the organic phases in the PLG tubes and the DNA remained in upper aqueous phases.This supernatant was transferred to a 2-ml PLG tube, and then, an additional 560 µl of Sevag solution was added.These tubes were mixed and centrifuged at 13,000g for 5 min.The upper aqueous phase containing the DNA was transferred to a new 2-ml tube.Seventy-five microliters of a 5 M sodium chloride solution and 635 µl of isopropanol were added to each tube.After mixing, DNA was precipitated at −20 °C for 15 min and then centrifuged at 20,800g for 10 min.The supernatant was discarded, and DNA was stored at −80 °C prior to hydrolysis.The crude nuclei of each sample in Group Two were completely dissolved in 850 µl of cold 4 M GTC solution in a cold room.The solution was transferred to a 2-ml PLG tube.Eight-hundred-fifty microliters of Sevag solution were added to this tube.The tube was centrifuged at 13,000g, and then, the upper phase containing the DNA was transferred to 2-ml PLG tube before an additional 850 µl of Sevag solution was added.These tubes were mixed and centrifuged at 13,000g for 5 min.Then, the upper aqueous phase containing the DNA was transferred to a new 2-ml tube and 850 µl of 2-isopropanol was added and incubated at −20 °C for 15 min to precipitate the DNA.DNA was pelleted by centrifugation at 20,800g for 10 min, and the samples were stored at −80 °C prior to hydrolysis.The DNA samples from Groups One and Two were hydrolyzed with 2 µg of nuclease P1 and 1 unit of alkaline phosphatase for 1 h at 50 °C for 1 h.The concentrations of 8-oxodG and dG were measured by HPLC-MS/MS.Accumulating data reveal that the ratios of 8-oxodG/dG vary in repeated measurements of the same samples from different individuals and in different laboratories , as well as those using different methods for sample preparation and different methods for detecting 8-oxodG .These variations were up 
to several orders of magnitude.For example, the quantity of 8-oxodG from lymphocyte DNA was 4.24 per 106 dG measured using HPLC, whereas it was 0.34 8-oxodG per 106 dG measured using the comet assay .Concentrations of 8-oxodG ranged from 2.23 to 441 8-oxodG per106 dG in DNA from pig liver using HPLC techniques .The inconsistency in the quantitation of the 8-oxodG/dG ratios implies that the actual amount of 8-oxodG in DNA cannot be determined as a result of the unsuitable hydrolysis conditions during processing.The release of 8-oxodG from DNA during enzymatic hydrolysis is influenced by a few factors, including the DNA concentration, choice of enzymes, enzymatic activities, incubation time and incubation temperatures.Excessive DNA, unsuitable enzymes, short incubation times and low temperatures may cause incomplete hydrolysis of DNA, while high temperature generates artefactual 8-oxodG.These drawbacks can result in an overestimation or underestimation of the 8-oxodG concentrations .Currently, most protocols of DNA hydrolysis are performed with approximately 100 µg of DNA, 1 to 20 µg of nuclease P1 and 0.5–20 U/ml of alkaline phosphatase for a few minutes to hours at 37 °C or overnight in a cold room .However, 100 µg of DNA are not completely hydrolyzed by 1 U/ml of nuclease P1 during a 1.5 h incubation hours at 37 °C, followed by a 1 h incubation at 37 °C with 1 U/ml of alkaline phosphatase, even if the doses of nuclease P1 or alkaline phosphatase are increased .Nuclease P1 and alkaline phosphatase can more rapidly and efficiently hydrolyze DNA at high temperatures, such as 65 °C, compared to 37 °C, but incubation periods in excess of 15 min at 65 °C increase the levels of artefactual 8-oxodG .The denaturation of DNA into single stands by incubating it at high temperatures is beneficial for complete hydrolysis , but the 100 °C temperature used to denature DNA may increase the levels of artefactual 8-oxodG.Another study shows that 100 µg of DNA plus was completely hydrolyzed by 1 µg P1 and 1 U/ml alkaline phosphatase in 1 h at 50 °C and produced less artefactual 8-oxodG .Therefore, in this study, five different hydrolysis conditions were used to compare the generation of 8-oxodG in calf thymus DNA.Based on these data, suitable hydrolysis conditions were optimized for the subsequent hydrolysis of the DNA from human liver tissues.One-hundred micrograms of calf thymus DNA were dissolved in 90 µl of a hydrolysis solution and then hydrolyzed with 1 µg, 5 µg, 10 µg or 20 µg of nuclease P1 plus 1 unit of alkaline phosphatase using the following conditions:The DNA solution was digested with nuclease P1 and alkaline phosphatase for 1 h at 50 °C.The DNA solution was first boiled at 100 °C for 5 min in a microwave oven and then rapidly chilled on ice for 2 min.Next, nuclease P1 and alkaline phosphatase were added to digest the DNA and incubated for 1 h at 50 °C.The DNA solution was incubated with nuclease P1 for 1 h at 50 °C before it was incubated with alkaline phosphatase for 1 h at 37 °C.The DNA solution was digested with nuclease P1 for 10 min at 65 °C and then treated with alkaline phosphatase for 1 h at 37 °C.The DNA solution was digested with nuclease P1 for 30 min at 37 °C and then treated with alkaline phosphatase for 1 h at 37 °C.After enzymatic hydrolysis, the solution was transferred into a 1.5-ml Phase Lock Gel tube.One-hundred microliters of Sevag solution was added to each tube, and the tubes were briefly mixed, and centrifuged at 13,000g for 5 min.The proteins and Sevag solutions 
precipitated in the organic phase of the tube, while 8-oxodG, dG and DNA remained in the upper phases.The supernatant was transferred to a new 0.5-ml tube and stored at −80 °C before the 8-oxodG levels were measured by HPLC-MS/MS."The DNA from patients' liver tissues was isolated using the cold 4 M GTC method.All procedures were performed in a cold room, unless stated otherwise.The dissection materials, reagents, 10-ml flat-bottom tubes and equipment used for DNA extraction were pre-chilled.Approximately 50 mg of frozen liver tissue per sample was cut on aluminum foil on dry ice, weighed, and immediately placed into numbered 10-ml flat-bottom tubes.One milliliter of ice-cold homogenization buffer was added to each numbered 10-ml tube on ice.The samples were completely homogenized with a power homogenizer for three minutes.After each sample was homogenized, the pestle was sequentially washed in pre-chilled 100% alcohol, Milli Q water and 100% alcohol, and then dried using a Kimwipe tissue.The homogenized solution for each sample was transferred from the 10-ml flat-bottom tube into a marked 2-ml centrifuge tube.One milliliter of homogenization buffer was added to the 10-ml flat-bottom tube to wash the tube and then added to the 2-ml tube.The solution was mixed with a vortex mixer for 1 min and then placed on ice for 5 min before the crude nuclei were pelleted by centrifugation at 1000g for 10 min.After discarding the supernatant, which contains membranes, proteins, mitochondria and most of the RNA, the nuclear fraction was re-suspended in one ml of Tween 20 buffer and placed on ice for 5 min.The samples were centrifuged at 1000g for 10 min, the supernatant was withdrawn, and the nuclei were re-suspended in one ml of Tween 20 buffer and placed on ice for 5 min.The samples were centrifuged a final time at 1000g for 10 min, and the supernatants were discarded,The pellets were dissolved in 850 µl of cold 4 M GTC by pipetting up and down to produce a clear solution and then incubated on ice for 20 min.After the pellets were completely dissolved, the DNA solution was transferred to a 2.0-ml pre-spun PLG tube.Eight-hundred-fifty microliters of cold Sevag solution was added to this tube, which was shaken by hand for 1 min, and then centrifuged at 13,000g for 5 min.On completion, the supernatant containing the DNA was transferred into a 2-ml tube and 850 µl of cold isopropanol were added and incubated at −20 °C for 1 h to precipitate the DNA.The solution was centrifuged at 16,000g for 10 min at 4 °C, and the supernatant was discarded.The pellets were re-suspended in 800 µl of cold 70% v/v ethanol and centrifuged for 3 min at 16,100g at 4 °C.The supernatant was carefully discarded before the tubes were drained by inversion on absorbent paper.One-hundred microliters of pre-chilled hydrolysis buffer was added to each tube to completely re-suspend the DNA.The concentrations of the DNAs isolated from the patients’ liver tissues were measured using a spectrophotometer.Two microliters of the DNA suspension was mixed with 98 µl of Milli Q water.The absorbance of the resulting solution was measured at A260 nm and the DNA concentration was calculated.Based on the optimization of the hydrolysis conditions, 2 µg of nuclease P1 and one unit of alkaline phosphatase were added to each tube and then mixed.The solution was incubated for 1 h at 50 °C.After incubation, this solution was transferred into a 0.5 ml pre-spun PLG tube, 100 µl of Sevag solution was added and the tube was briefly mixed before being 
centrifuged at 13,000g for 5 min.After centrifugation, the supernatant solution containing hydrolyzed DNA was transferred to a fresh 0.5-ml tube and stored at −80 °C prior to the HPLC analysis.The concentrations of 8-oxodG and dG were determined with HPLC-MS/MS by an expert in Queensland Health Scientific Services.Separation was achieved using an Altima C18 column at 35 °C and a flow rate of 0.8 ml min−1, with a linear gradient starting at 100% A for 0.1 min, ramped to 80% B in 12 min, held for 2 min and then to 100% A for 1 min and equilibrated for 7 min.The dead space in the system modified the actual gradient at the column and was equivalent to approximately 3 min at 100% A before the start of the gradient.Under these conditions, the retention times for dG and 8-oxodG were 8.58 and 8.75 min, respectively.The column effluent was split to achieve a flow rate of 0.25 ml per minute to the mass spectrometer.The mass spectrometer was operated in the multiple reaction-monitoring mode using nitrogen as the collision gas and a collision energy of 20 eV.The transitions from m/z 284.2 to 168.1 for 8-oxodG and 268.2 to 152.1 for dG were monitored with a residence time of 350 ms. The samples were quantified by comparing the peak areas of the standards to the peak areas of the samples.Using a 50-µl injection volume, the limit of detection using this method is typically 1 µg/l for 8-oxodG and 50 µg/l for dG.Some interference from other components present in the sample has been noted, particularly in the dG determination.All statistical calculations were performed and the graphs were plotted using GraphPad Prism software version 6.0.In the graphs that chart the 8-oxodG ratios, all of the points cannot be shown because some of the 8-oxodG/dG ratios deviated from the others at the bottom of the graphs.Therefore, according to the statistical guide of GraphPad Prism software version 6.0, the 8-oxodG/dG ratios were first transformed to logarithms."The logarithms of the 8-oxodG/dG ratios of the various groups of patients were evaluated for the normality of the distribution by the D'Agostino & Pearson omnibus normality test to decide whether a nonparametric rank-based analysis or a parametric analysis should be used.Statistically significant differences in the unpaired hepatic 8-oxodG/dG ratios of the two groups of patients were compared with the Mann–Whitney U test for data without a normal distribution or with the unpaired t test for data with a normal distribution.Statistically significant differences between the logarithms of paired hepatic 8-oxodG/dG ratios of non-malignant liver tissues and those of malignant tissues from the same HCC patients were compared with the Wilcoxon matched-pairs signed rank test for the data without a normal distribution or with the paired t test for the data with a normal distribution.P values less than 0.05 were considered statistically significant.The box-whisker plots expressed the logarithms of the 8-oxodG/dG ratios.The results were expressed as the means±standard derivation or as medians.In this study, to determine the effects of temperature on the formation of 8-oxodG during DNA extraction, human liver DNA was extracted using the cold 4 M GTC method and the warm RNase A/proteinase K method.These DNA samples together with commercial calf DNA used as a control were hydrolyzed and the 8-oxodG levels were measured by HPLC-MS/MS.The results indicated that the cold 4 M GTC method had the lowest ratios of 8-oxodG/dG, while the ratio from the same liver tissue using the warm 
RNase A/proteinase K method was increased by approximately two-fold and the 8-oxodG/dG ratios from the commercial calf DNA exposed to room air were 31-fold higher.Additionally, the DNA yields of the eight samples were significantly different; the largest DNA yield was 7.57 µg/mg, and the smallest DNA yield was 0.19 µg/mg.The 8-oxodG levels in 2 samples could not be detected by HPLC-MS/MS, although Table 2 showed that sufficient amounts of DNA were used.One-hundred micrograms of commercial calf thymus DNA was hydrolyzed with 1 µg, 5 µg, 10 µg or 20 µg of nuclease P1 and 1 unit of alkaline phosphatase at 5 different temperatures and incubation conditions.The results from these experiments are shown in Table 3.These data revealed that, first, the dG yields in Conditions 1, 2 and 3 were higher than those in Conditions 4 and 5.Second, the DNA was hydrolyzed at similar levels with all concentrations of nuclease P1 in these three conditions.Third, the 8-oxodG/dG ratios in Conditions 1 and 3 were less than those in Conditions 2 and 5.This indicated that Conditions 1 and 3 resulted in reduced 8-oxodG production compared to Conditions 2 and 5.A previous study supported the use of Condition 1 to hydrolyze the DNA for 8-oxodG measurements , and a decision was made to hydrolyze the DNA using Condition 1, in which the DNA was incubated with 2 µg of nuclease P1 and 1 unit of alkaline phosphatase for 1 h at 50 °C.Among the total 54 HCC cases, both non-malignant liver tissues and malignant tissues were available for some cases, while only non-malignant liver tissues or malignant tissues were available for other cases.Each tissue was used to measure the 8-oxodG and dG levels 1–6 times, and then, the 8-oxodG/dG ratios in each tissue were averaged.The 8-oxodG/dG ratios were first transformed to logarithms.The distribution of the logarithms of the 8-oxodG/dG ratios in the malignant and non-malignant liver tissues was analyzed for normality with the D'Agostino and Pearson omnibus normality test.Only the distributions of the logarithms of the 8-oxodG/dG ratios in the non-malignant tissues and malignant tissues from Australian HCC patients were normal, while those of the total HCC cases or the Southern African HCC patients were not.Therefore, statistically significant differences between the logarithms of the 8-oxodG/dG ratios in the non-malignant liver tissues and those in the malignant tissues of Australian HCC patients were evaluated with the unpaired t test, while those of the total cases and the Southern African HCC patients were tested with the Mann–Whitney U test.The statistical analysis showed that there were significant differences between the logarithms of the 8-oxodG/dG ratios in the non-malignant tissues and those in the malignant tissues of Southern African HCC patients, while there were no significant differences in the total cases or Australian HCC patients.In the previous sections, the logarithms of the pooled 8-oxodG/dG ratios in all non-malignant liver tissues were compared to those of all malignant tissues.In this section, the logarithms of the 8-oxodG/dG ratios in non-malignant liver tissues were compared to those in malignant tissues from the same HCC patients.The distributions of the logarithms of the 8-oxodG/dG ratios in non-malignant and malignant liver tissues from the same HCC patients were analyzed for normality with the D'Agostino & Pearson omnibus normality test.The results showed that the distributions of the logarithms of the 8-oxodG/dG ratios in non-malignant liver tissues and
malignant tissues of Australian HCC patients were normal, while those in the total HCC patients or Southern African HCC patients were not normal.Therefore, the statistically significant differences between the logarithms of the 8-oxodG/dG ratios in non-malignant liver tissues and those in malignant tissues of Australian HCC patients were compared using the paired t test, while those in the total HCC patients and Southern African HCC patients were compared using the Wilcoxon matched-pairs signed rank test.The statistical analysis showed that there was a significant increase in this ratio in the non-malignant liver tissue of Southern African HCC patients, while there was no significant difference in the total cases or Australian patients.When the HCC patients were separated according to the presence of cirrhosis, the normalities of the logarithms of the 8-oxodG/dG ratios in the non-cirrhotic liver tissues and cirrhotic liver tissues of 31 HCC patients were evaluated with the D'Agostino and Pearson omnibus normality test.The results showed that the distributions of the logarithms of the 8-oxodG/dG ratios in non-cirrhotic and cirrhotic liver tissues of Australian HCC patients were normal, but those of the total HCC cases or Southern African HCC patients were not normal.Therefore, the statistically significant differences between the non-cirrhotic and cirrhotic liver tissues of Australian HCC patients were tested with the unpaired t test, while the differences in the total HCC patients and Southern African HCC patients were compared with the Mann–Whitney U test.The results demonstrated that there was no significant difference between the 8-oxodG/dG ratios in the cirrhotic and non-cirrhotic liver tissues from the total HCC patients, Australian HCC patients or Southern African HCC patients.The logarithms of the 8-oxodG/dG ratios in patient groups classified according to their underlying liver disease were not normally distributed.The logarithms of the 8-oxodG/dG ratios in malignant and non-malignant liver tissue in various chronic liver diseases were compared with the Mann–Whitney U test.This analysis revealed that there were no significant differences between the logarithms of the 8-oxodG/dG ratios in non-malignant liver tissues and those in malignant tissues in these liver diseases.In this study, there was a trend towards increased 8-oxodG/dG ratios in non-malignant liver tissues compared to malignant liver tissues, although the difference was not significant.This trend was most obvious in Southern African patients.Nevertheless, the data do not support the hypothesis that the hepatic 8-oxodG/dG ratios are increased in non-malignant liver tissues compared to HCC.The 8-oxodG/dG ratios found in this study had skewed distributions for most of the analyzed groups, with most groups having clustered 8-oxodG/dG values, while a small number of outlier samples were widely separated from the rest of the group.At least one previous study has observed that the 8-oxodG levels markedly increase in non-malignant liver tissue compared to the tumors in patients with HCC.For example, the difference between non-malignant liver tissue and tumors was observed in both American and Southern African patients with HCC .Another study has shown that the 8-oxodG levels in non-malignant liver tissues are significantly increased compared to the corresponding HCC tissues of the same subject and that the 8-oxodG levels are significantly increased in non-malignant tissues with moderate inflammation compared to those with mild or no
inflammation .A positive correlation between the 8-oxodG concentration in non-malignant liver tissue and serum alanine aminotransferase activity has also been observed .However, several previous studies have revealed that the 8-oxodG levels are decreased in non-malignant livers compared to the malignant liver tissues from the same patient.For example, 8-oxodG concentrations have been reported to be decreased in the cancer-free surrounding tissues compared to the malignant lung tissues .A number of studies have shown that 8-oxodG is the main ROS- and RNS-oxidized DNA base; moreover, 8-oxodG is a pre-mutagenic agent, and chronic liver inflammation leads to the production of ROS and RNS .This means that the presence of increased 8-oxodG levels in liver tissues adjacent to HCC implicates 8-oxodG as a link between chronic hepatic inflammation and hepatic carcinogenesis; the oxidative DNA damage in chronic hepatic inflammation may lead to the malignant transformation of hepatocytes.However, this study identified an increasing trend in the 8-oxodG/dG ratio in non-malignant liver tissues compared to malignant tissues of HCC patients, particularly in HCC patients with HBV, but these trends did not reach statistical significance.This study also revealed that there was no significant difference in the 8-oxodG/dG ratios between the cirrhotic and non-cirrhotic liver tissue of patients with HCC or among HCC patients with various risk factors, including HBV, HCV, alcohol, Alagille syndrome, and haemochromatosis.Moreover, most comparisons of the 8-oxodG/dG ratios showed that there were no significant differences between the patient groups and tissue types.Previous reports show that the 8-oxodG/dG ratios in non-malignant liver tissues and malignant HCC tumors were affected by the formation, oxidation and removal of 8-oxodG in inflammatory tissues.In liver inflammation, the activated immune system generates excessive ROS and RNS, which can not only form 8-oxodG but can also decompose 8-oxodG.Recent studies have shown that 8-oxodG easily reacts with ONOO− , 1O2 and Fe2+ to form secondary oxidative products, as it possesses a lower redox potential than guanosine .For example, 8-oxodG is oxidized by ONOO− at least 1000 times faster than G .DNA repair enzymes can also remove 8-oxodG .Recent reports also revealed that 8-oxodG is not the only pro-mutagenic agent in hepatocarcinogenesis.To date, more than 100 DNA lesions in addition to 8-oxodG have been identified , and other various oxidative products of bases and secondary oxidative products of 8-oxodG are also pro-mutagenic agents .Some oxidative products of bases have higher mutation frequencies.For example, the frequencies of the G to T conversion for oxazolone and spiroiminodihydantoin are far higher than those of 8-oxodG .Based on the above observations, the 8-oxodG/dG ratio did not reflect a correlation between oxidative DNA damage in chronic liver inflammation and hepatocarcinogenesis.In this study, there was no statistically significant difference in the 8-oxodG/dG ratios between non-malignant and malignant liver tissue from Australian patients or between cirrhotic and non-cirrhotic liver tissues of patients with HCC.The 8-oxodG/dG ratios in non-malignant liver tissues of HCC patients with HBV were significantly increased compared to the tumors in these patients, but significant differences were not found in HCC patients with other chronic liver diseases, including alcohol, Alagille syndrome, and haemochromatosis.Most comparisons of the 8-oxodG/dG ratios
showed that there were no significant differences between the patient groups and tissue types.There may be several reasons for these observations.One possible reason was that we used an insufficient number of tissues and cases, which may have affected these comparisons.The tissues from the Australian patients were extremely valuable and in limited supply.Additionally, there were only 4 cases of HCC patients with HCV, 4 cases of HCC patients with alcohol, and 1 case of an HCC patient with Alagille syndrome.This, together with the limited availability of sample material in some groups, made comparisons of the 8-oxodG/dG ratios difficult.The second reason is that there is no correlation between the malignant grades and these risk factors.For example, one study found that there was no positive correlation between the hepatic iron content and 8-oxodG concentration .The hepatic Fe and hepatic 8-oxodG levels are not correlated .However, other studies have shown a link between alcohol, iron and 8-oxodG.The third reason is that there were variations in the degree of inflammation/liver injury; moreover, the presence of non-parenchymal elements, such as connective tissue and blood vessels, in different tissue samples may have impacted the final DNA yield and the 8-oxodG/dG ratio.Previous studies have indicated that the production of artefactual 8-oxodG caused by high temperature, incomplete hydrolysis of DNA and imprecise 8-oxodG measurements lead to a lack of significant differences in the 8-oxodG/dG ratios between non-malignant liver tissues and tumoural tissues.This study attempted to remove these confounds by optimizing the conditions for DNA isolation, DNA hydrolysis and the 8-oxodG measurements.During the optimization of the DNA extraction procedure, there were problems in the earliest experiments, with large variations in the DNA yield between samples, and there was a failure to identify 8-oxodG by HPLC.This was because the tissues were not completely homogenized or the DNA was lost during the procedures.Thereafter, special care was taken so that the tissues were homogenized gently to avoid rupturing the nuclear membrane, which would have resulted in the loss of the DNA during the nuclei precipitation step.The homogenization tube was washed with a small amount of homogenization buffer two or three times and the residual homogenized solution was collected into the sample.If large amounts of lipid were present during centrifugation, the centrifugation force was increased to pellet the nuclei.The crude nuclei pellets were dissolved in cold 4 M GTC, which suppressed any of the DNase enzymes present in the solution that otherwise would have hydrolyzed the DNA .The nuclei pellets were purified with Sevag solution.Sevag solution consisted of 1 volume of isoamyl alcohol and 24 volumes of chloroform.The isoamyl alcohol reduced foaming, aided the separation, and maintained the stability of the layers of the centrifuged, deproteinized solution.Chloroform causes surface denaturation of proteins.It is worth noting that chloroform can attack some microcentrifuge tubes, resulting in sample leakage during centrifugation.To avoid this problem, the DNA solution was immediately transferred to a fresh tube after centrifugation.During DNA extraction, PLG tubes were used to separate the organic phase containing proteins and the aqueous phases containing DNA.One difference in the PLG tubes compared to normal centrifuge tubes is that these tubes contain a phase block gel that can separate aqueous and organic media based
on their density differences.After centrifugation, the denatured protein and organic solutions are effectively trapped in the lower organic phases of the PLG by the gel, while the DNA remains in upper aqueous phases and can be easily removed with a pipette.The use of the PLG tubes yielded a DNA-containing phase that could be easily pipetted, resulting in the recovery of 20 to 30% more nucleic acid than with traditional methods.After cold isopropanol was added to the solution, DNA rapidly precipitated out of solution as a stringy gelatinous clump, unless it was sheared.If the DNA was sheared, it was precipitated by placing the tube in a −20 °C freezer for 20 min to overnight.Table 2 showed that the DNA yields exhibited large variations among samples, in which the highest DNA yield was 7.57 µg/mg, whereas the smallest DNA yield was 0.19 µg/mg.Obviously, these data did not represent the actual DNA concentrations.This was because the DNA precipitate was incompletely dissolved.Large DNA aggregates remaining in the tube produced abnormally high or low UV absorbance readings, leading to an erroneous DNA concentration.To avoid this problem, the DNA precipitate was carefully dissolved by pipetting it repeatedly to obtain a homogeneous preparation.Additionally, the 8-oxodG levels in 2 samples could not be detected by HPLC-MS/MS.One reason could be that insufficient DNA concentrations were used, although the data showed that their DNA concentrations were very high.As mentioned in a previous report, if 1 mg of human liver tissue yields approximately 1 µg of DNA , 50 mg of tissue would yield 10 fmol of 8-oxodG, which is close to the detection limit of 7.5 fmol for HPLC-MS/MS.Therefore, if the tissues were not completely homogenized or DNA was lost during an extraction, 8-oxodG could not be detected by HPLC-MS/MS.Consequently, great care was taken to ensure that as much tissue was available as possible.In this study, to determine the effects of temperature on the formation of 8-oxodG during DNA extraction and hydrolysis, human liver DNA was extracted with the cold 4 M GTC method and a warm RNase A/proteinase K method.These DNA samples, together with commercial calf DNA as a control, were hydrolyzed and the 8-oxodG levels were measured by HPLC-MS/MS.The results indicated that the cold 4 M GTC method produced the lowest 8-oxodG yield, while the levels from the same liver tissue using the warm RNase A/proteinase K method were increased approximately two-fold, and the 8-oxodG/dG ratios from the commercial calf DNA exposed to room air were increased 35-fold.These results indicated that the low temperature DNA extraction method reduced the formation of 8-oxodG during the procedure.In addition, the much higher 8-oxodG/dG values in commercial calf DNA compared to those of the DNA that had been freshly extracted using both the cold 4 M GTC method and the warm RNase A/proteinase K method may have resulted from the exposure of the DNA to the oxygen in air during the longer incubation at room temperature, stressing the need for refrigeration of the extracted DNA and minimizing its exposure to air prior to analysis.Therefore, the cold 4 M GTC method was used to extract the DNA from the human liver tissues in this study.Additionally, samples were stored in a −80 °C freezer, and the tissues were dissected and homogenized and DNA was extracted in a cold room.All other procedures were performed in a cold room as much as possible.The dG yields from 100 µg of commercial calf thymus DNA that was hydrolyzed with 1 µg, 5 µg, 10 µg
or 20 µg of nuclease P1 and 1 unit of alkaline phosphatase under Conditions 1, 2 and 3 were more similar to one another and larger than those under Conditions 4 and 5.This indicated that the dG yields were not correlated with the amount of nuclease P1 under Conditions 1, 2 and 3.This observation was consistent with a previous study in which 10 to 25 µg of nuclease P1 did not increase DNA hydrolysis .Furthermore, 1 µg of nuclease P1 completely hydrolyzed 100 µg of commercial calf thymus DNA under Conditions 1, 2 and 3.Conditions 4 and 5 produced less dG and 8-oxodG compared to Conditions 1, 2 and 3, but the 8-oxodG/dG ratios were still high.This indicated that incomplete DNA hydrolysis led to an overestimation of the 8-oxodG levels.Condition 4 produced incomplete hydrolysis due to the short hydrolysis time, in which nuclease P1 hydrolyzed DNA for only 10 min, whereas the time increased to 1 h under Conditions 1, 2 and 3 and 0.5 h under Condition 5.The incomplete DNA hydrolysis under Condition 5 was due to the low temperature.A previous study has also revealed that DNA hydrolysis is poor below 37 °C .Condition 2 produced more 8-oxodG than Conditions 1, 3, 4 and 5.This indicated that high temperature produced 8-oxodG.In summary, 1 µg of nuclease P1 completely hydrolyzed 100 µg of DNA in 1 h at 50 °C.However, this hydrolysis was not complete with a short incubation time even at 65 °C.An additional concern with heating the samples to 100 °C during hydrolysis was that this appeared to lead to the formation of artefactual 8-oxodG.Conditions 1 and 3 produced similar amounts of dG and 8-oxodG, but Condition 3 involved an additional hour-long incubation at 37 °C for hydrolysis by alkaline phosphatase.The longer hydrolysis time and room temperature increased the risk of forming artefactual 8-oxodG.For these reasons, the DNA of the HCC patients in this study was hydrolyzed with 1 unit of alkaline phosphatase and 1 µg of nuclease P1 for one hour at 50 °C.First, the experiments were limited to approximately 50 mg of liver tissue per sample.This was necessary because of the scarcity of these tissues.This amount of tissue gave a yield of approximately 10 fmol of 8-oxodG, which was close to the detection limit of 7.5 fmol for HPLC-MS/MS.If tissues were not completely homogenized, DNA was lost during an extraction step, or DNA was incompletely hydrolyzed, 8-oxodG could not be detected by HPLC-MS/MS.Therefore, great care was taken to ensure that as much tissue was available as possible.Second, the temperature at which the samples were processed required careful control because high temperatures produced artefactual 8-oxodG values.In this experiment, the samples were stored in a −80 °C freezer, and the tissues were dissected and homogenized and DNA was extracted in a cold room.All other procedures were performed in a cold room, as much as possible; however, the DNA was hydrolyzed at 50 °C and the 8-oxodG levels were measured by HPLC-MS/MS at room temperature.These temperatures could potentially produce artefactual 8-oxodG.It is not known how much 8-oxodG was produced during the experimental procedures and whether this influenced our results.Third, it is not possible to determine whether the DNA was completely hydrolyzed with nuclease P1 and alkaline phosphatase, even if all of the procedures are implemented to allow complete DNA hydrolysis.If the DNA is not completely hydrolyzed, the ratios of 8-oxodG/dG will be altered.Therefore, compared to previous studies, this experiment was performed very carefully to reflect the actual 8-oxodG values.This study revealed that the 8-oxodG/dG ratios tended to be higher in most non-malignant liver tissues than those in HCC tissues, although this was not statistically significant.It also appeared that the ratio was higher in the non-malignant liver tissue from Southern African patients, but there was no difference in the 8-oxodG/dG ratios between non-malignant liver tissues and tumors of Australian HCC patients.Additionally, this study also revealed an increasing trend for 8-oxodG/dG ratios in non-malignant liver tissues compared to tumoural tissues of patients with HBV.These findings confirmed that there was an association between oxidative DNA damage and chronic liver inflammation, but there was no dose-dependent relationship between the 8-oxodG levels and hepatocarcinogenesis.Due to the limitations of this study, significant differences in the 8-oxodG/dG ratios between the non-cirrhotic and cirrhotic non-malignant liver tissues were not observed.The methods used in these experiments were suitable for measuring the 8-oxodG levels from human liver tissues.The cold 4 M GTC method produced a sufficient amount of DNA from 50 mg of the tissues used for the analysis.Fifty micrograms of DNA were sufficiently hydrolyzed using 1 µg of nuclease P1 and 1 unit of alkaline phosphatase.The levels of artefactual 8-oxodG produced using the optimized methods in this study were demonstrated to be lower than those produced using the methods described in previous studies.Despite these limitations, this study more thoroughly controlled for artefactual 8-oxodG/dG production and more accurately reflects the actual 8-oxodG/dG levels in the samples than previous studies.
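As a worked illustration of the spectrophotometric DNA quantitation and the detection-limit reasoning discussed in this section, the following sketch reproduces the arithmetic in Python. It is not part of the original study; the A260 conversion factor of 50 µg/mL per absorbance unit for double-stranded DNA, the mean base-pair mass of 660 g/mol, the 40% GC content and the illustrative 8-oxodG/dG ratio of 1 per 10^6 are assumptions made here for illustration, not values reported in the text, so the resulting fmol estimate is only expected to match the ~10 fmol quoted above in order of magnitude.

```python
# Sketch of the DNA-quantitation and detection-limit arithmetic.
# All constants below are assumptions for illustration, not study values.

A260_DSDNA_UG_PER_ML = 50.0      # assumed: 1 A260 unit ~ 50 ug/mL dsDNA
MEAN_BP_MASS_G_PER_MOL = 660.0   # assumed average molar mass of one base pair

def dna_concentration_ug_per_ml(a260_diluted, sample_ul=2.0, total_ul=100.0):
    """Concentration of the undiluted DNA suspension from the diluted A260 reading."""
    dilution_factor = total_ul / sample_ul          # 2 uL into 100 uL total -> 50x
    return a260_diluted * A260_DSDNA_UG_PER_ML * dilution_factor

def expected_8oxodg_fmol(dna_ug, gc_fraction=0.40, oxo_per_dg=1e-6):
    """Rough amount of 8-oxodG (fmol) expected in a hydrolysate of dna_ug of DNA."""
    mol_bp = dna_ug * 1e-6 / MEAN_BP_MASS_G_PER_MOL  # grams of DNA -> mol of base pairs
    mol_dg = mol_bp * gc_fraction                    # one dG per G:C base pair
    return mol_dg * oxo_per_dg * 1e15                # mol -> fmol

if __name__ == "__main__":
    # Hypothetical reading: A260 = 0.10 for the 1:50 diluted suspension.
    conc = dna_concentration_ug_per_ml(0.10)
    print(f"DNA concentration: {conc:.0f} ug/mL")                 # 250 ug/mL
    # ~50 ug of DNA, e.g. from ~50 mg of tissue at ~1 ug DNA per mg:
    print(f"Expected 8-oxodG: {expected_8oxodg_fmol(50):.0f} fmol")
```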
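The statistical decision procedure described above (log-transformation of the 8-oxodG/dG ratios, the D'Agostino & Pearson omnibus normality test, and the choice between a parametric and a rank-based comparison) can also be summarized in a short sketch. The study's own analysis was performed in GraphPad Prism; the Python/SciPy code below is therefore only an illustration of the same logic, and the function name and example ratio values are hypothetical.

```python
# Minimal sketch of the test-selection logic described in the statistics section:
# log-transform the 8-oxodG/dG ratios, test normality with the D'Agostino & Pearson
# omnibus test, then choose the parametric or rank-based comparison accordingly.
import numpy as np
from scipy import stats

def compare_groups(ratios_a, ratios_b, paired=False, alpha=0.05):
    """Compare two sets of 8-oxodG/dG ratios on the log scale."""
    log_a, log_b = np.log10(ratios_a), np.log10(ratios_b)

    # D'Agostino & Pearson omnibus normality test on each group.
    normal = (stats.normaltest(log_a).pvalue > alpha and
              stats.normaltest(log_b).pvalue > alpha)

    if paired:
        # Tissues from the same patients: paired t test if normal,
        # otherwise the Wilcoxon matched-pairs signed rank test.
        result = stats.ttest_rel(log_a, log_b) if normal else stats.wilcoxon(log_a, log_b)
    else:
        # Independent groups: unpaired t test if normal, otherwise Mann-Whitney U.
        result = (stats.ttest_ind(log_a, log_b) if normal
                  else stats.mannwhitneyu(log_a, log_b, alternative="two-sided"))
    return normal, result.pvalue

# Hypothetical ratios (8-oxodG per dG) for non-malignant vs. malignant tissue
# from the same patients; small arrays are used only to keep the example short.
non_malignant = np.array([2.1e-6, 3.4e-6, 1.8e-6, 5.2e-6, 2.9e-6, 4.1e-6, 2.4e-6, 3.0e-6])
malignant     = np.array([1.5e-6, 2.2e-6, 1.1e-6, 2.8e-6, 1.9e-6, 2.5e-6, 1.3e-6, 1.7e-6])
print(compare_groups(non_malignant, malignant, paired=True))
```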
The aim of this study was to examine the role of oxidative DNA damage in chronic liver inflammation in the evolution of hepatocellular carcinoma. The accumulated data demonstrated that oxidative DNA damage and chronic liver inflammation are involved in the transformation of normal hepatocytes and their evolution towards hepatocellular carcinoma. However, the levels of 8-oxo-2'-deoxyguanosine (8-oxodG), a biomarker of oxidative DNA damage, were overestimated or underestimated in previous reports due to various technical limitations. The current techniques are not suitable to analyze the 8-oxodG levels in the non-malignant liver tissues and tumors of hepatocellular carcinoma patients unless they are modified. Therefore, in this study, the protocols for extraction and hydrolysis of DNA were optimized using 54 samples from hepatocellular carcinoma patients with various risk factors, and the 8-oxodG and 2'-deoxyguanosine (dG) levels were measured. The patients enrolled in the study included 23 from The Princess Alexandra Hospital and The Royal Brisbane and Women's Hospitals, Brisbane, Australia, and 31 from South Africa. This study revealed that the 8-oxodG/dG ratios tended to be higher in most non-malignant liver tissues compared to hepatocellular carcinoma tissue (p=0.2887). It also appeared that the ratio was higher in non-malignant liver tissue from Southern African patients (p=0.0479), but there was no difference in the 8-oxodG/dG ratios between non-malignant liver tissues and tumors of Australian hepatocellular carcinoma patients (p=0.7722). Additionally, this study also revealed a trend for a higher 8-oxodG/dG ratio in non-malignant liver tissues compared to tumoural tissues of patients with HBV. Significant differences were not observed in the 8-oxodG/dG ratios between non-cirrhotic and cirrhotic non-malignant liver tissues.
442
Dual effects of brain sparing opioid in newborn rats: Analgesia and hyperalgesia
Unrelieved pain in the term and preterm neonate initiates maladaptive plasticity that can persist later in life.Opioids can prevent this plasticity while providing analgesia.There are concerns, however, that opioids have unwanted effects on the immature brain.For instance, preemies who received opiates in the neonatal intensive care unit (NICU) can develop a smaller head circumference, lower body weight, short-term memory impairments, and difficulty socializing.In animal models, administering opioids during the post-natal period leads to altered mu-opioid receptor (MOR) expression in the forebrain, and increased pain behavior later in life.Given that opioids are effective analgesics for acute pain, a possible strategy is to use brain sparing opioids in the newborn.To explore this approach we chose the brain sparing MOR agonist loperamide.Loperamide produces analgesia in adult models of inflammatory, cancer, and neuropathic pain by acting on the peripheral opioid receptors.Accordingly, MORs in the periphery are critically involved in the analgesic effects of opioids.Since there is a greater expression of MORs in primary sensory neurons during the first 2 post-natal weeks, we postulated that newborns would be ideal candidates for loperamide-induced antinociception.We tested loperamide in newborn rats, which are developmentally similar to premature humans.We first assessed the effects of loperamide on the nociceptive withdrawal threshold in normal newborns, and then in newborns with an inflamed hind paw after a local carrageenan injection.We then determined if loperamide crosses the blood–brain barrier (BBB) of the neonate rat.Finally, given that brain penetrant opioids can produce pro-nociceptive effects, we tested the effect of daily loperamide on the nociceptive threshold, the peripheral neuronal activity using patch clamp recordings, and the CNS activity using Fos immunochemistry.Male and female Sprague-Dawley rats, post-natal day 3 at the start of the experiment, were studied.Pups were kept with their littermates and mother in a dedicated room with an alternating 12 h light-dark cycle.Food and water were available ad libitum.For each experimental group, 8–10 pups were used.No adverse effects of loperamide were observed during the experiment.Procedures for the maintenance and use of the experimental animals conformed to the regulations of UCSF Committees on Animal Research and were carried out in accordance with the guidelines of the NIH regulations on animal use and care.The UCSF Institutional Animal Care and Use Committee approved the protocols for this study.Loperamide and chemicals were purchased from Sigma-Aldrich unless noted otherwise.For acute experiments, a single dose of loperamide 1 mg/mL or an equal volume of vehicle was administered 30 min before carrageenan injection in the left hind paw.This preemptive analgesia mimics protocols promoting early interventions in the NICU to prevent the long-term effects of untreated pain.Prior to the injection of carrageenan, but not prior to loperamide, rats were tested for the baseline thermal withdrawal latency.In preliminary experiments we observed that loperamide 1 mg/kg did not increase the withdrawal latency in the Hargreaves test.We also found that decreasing the number of heat exposures in neonates minimizes the risk of stimulus induced paw sensitization.Rats were then retested at 5 min, 30 min, 1 h and 4 h after the carrageenan injection.For chronic experiments, loperamide was administered once daily starting at P3 and lasting until P7.Hind paw withdrawal latency
to the heat stimulus was evaluated every day starting on the first day prior to the initial dose of loperamide and then daily 6 h after each injection.This delay of 6 h, between the loperamide injection and the Hargreaves test, ensured that the nociceptive threshold was measured when the plasma levels of loperamide were high.Testing animals immediately prior to the daily injection of loperamide might also have shown hyperalgesia, but this could have reflected early opioid withdrawal instead.On each day, after pups were administered loperamide or tested, they were immediately returned to the dam.Precautions were taken to ensure that none of these newborns were rejected by their mother.During all manipulations and testing procedures, care was taken to keep body temperature constant.Control animals received the same volume of vehicle on the same schedule.After the last dose of loperamide or vehicle, pups were randomly injected with carrageenan or saline in the left hind paw.Their lumbar spinal cord was collected and processed for Fos immunocytochemistry 3 h later.An investigator blind to the treatment groups performed the behavioral studies.Heat pain latency was measured using the Hargreaves plantar test device.Rats were placed into the test area 60 min prior to testing.The glass plate on which they were free to move was preheated to 30 °C to keep them comfortable.The withdrawal latency from a heat stimulus was measured 3 times for each hind paw, with a 5-min interval between individual measures.The mean value in seconds was used as the thermal nociceptive threshold (a minimal processing sketch appears at the end of this section).Although never reached, a cutoff of 20 s was used to prevent skin damage.To assess for possible penetration of loperamide in the CNS, we determined the concentration of loperamide in the CSF in P3 rats using mass spectrometry.Serum levels were also determined by the same method.Based on a plasma half-life of 9–13 h, a time to peak plasma concentration of 2.5 to 6 h, and a duration of action of up to 3 days, CSF and blood samples were acquired 6 h after a high dose of loperamide.CSF was obtained by puncture of the dura overlying the cisterna magna using an operating microscope and a pulled glass capillary pipette while the animals were under hypothermic anesthesia.Care was taken to make sure that the CSF was not contaminated by blood.Collection of blood was done by cardiac puncture into a 1.5 mL tube containing EGTA.The blood was spun down at 1500 g for 10 min in a refrigerated centrifuge.The supernatant was collected into a clean tube.CSF and serum samples were kept at −20 °C prior to analysis.Serum and CSF loperamide levels were determined by liquid chromatography-tandem mass spectrometry using Agilent LC 1260-AB Sciex 5500.Each analyte was ionized using electrospray ionization in the negative mode and monitored by multiple reaction monitoring.The serum and CSF were prepared for LC-MS/MS analysis by solid phase extraction using a Waters Oasis HLB cartridge.Each cartridge was washed with 5 column volumes of methanol prior to activation with water for loading of serum or CSF.The column was washed with 1 mL 5% methanol before each analyte was eluted with 1 mL of methanol.The eluates were evaporated under a stream of nitrogen gas after which they were reconstituted in 10% methanol for column injection.A 5 μl aliquot of the extract was used for each replicate injection of the sample.Chromatographic separation of the analytes was achieved by gradient elution using MeOH/H2O + 10 mM ammonium acetate + 0.1% acetic acid as solvent A, and
MeOH/H2O + 5 mM ammonium acetate + 0.1% formic acid as solvent B.The elution gradient employed was 0–0.5 min = 30% B; 0.5–1 min = 75% B; 1–4 min = 100% B; 4–5.5 min = 100% B; and 5.5–6 min = 30% B.The analytes had a quantitation limit of 0.1 ng/mL.Data analysis was conducted using AB Sciex Analyst 1.6 and AB Sciex MultiQuant 2.1 software packages.The method for intact DRG recordings has recently been described.This method preserves the neuroglial interactions as well as the afferent and efferent axons to obtain data closer to in vivo conditions.On P7, neonatal rats were euthanized and the spinal columns were quickly removed.The spinal columns were then placed into ice-cold carbogenized artificial CSF (aCSF).The aCSF contained: 124 mM NaCl, 2.5 mM KCl, 1.2 mM NaH2PO4, 1.0 mM MgCl2, 2.0 mM CaCl2, 25 mM NaHCO3, and 10 mM glucose.Laminectomies were performed and the spinal cords were removed.Following this, the DRGs were collected under the dissection microscope.Each DRG was transferred to a recording chamber after the surrounding connective tissue was removed.There, it was perfused with aCSF at a rate of 2–3 mL/min.A small area of the collagen layer on the surface of each DRG was digested to expose neurons to the recording pipette.For this purpose we used the enzyme mix “Liberase”.A fine mesh anchor was used to anchor down the DRGs during recordings.DRG neurons were visualized with a 40X water-immersion objective using a microscope equipped with infrared differential interference contrast optics.The image was captured with an infrared-sensitive CCD and displayed on a black and white video monitor.Currents were recorded with an Axon 200B amplifier connected to a Digidata interface and low-pass filtered at 5 kHz, sampled at 1 kHz, digitized, and stored using pCLAMP 10.2.Patch pipettes were pulled from borosilicate glass capillary tubing with a P97 puller.The resistance of the pipette was 4–5 MΩ when filled with recording solution which contained: 140 mM KCl, 2 mM MgCl2, 10 mM HEPES, 2 mM Mg-ATP, 0.5 mM Na2GTP, pH 7.4.Osmolarity was adjusted to 290–300 mOsm.After a gigaseal was established on a neuron, the membrane was broken and the cell was selected for further study if it had a resting membrane potential more negative than −50 mV.The access resistance was 10–20 MΩ and was continuously monitored.Data were discarded if the access resistance changed by more than 15% during an experiment.Small diameter neurons, i.e.
dark neurons, were exclusively selected for patch clamp recordings.The size of the neurons was determined by measuring the diameter on the screen.The DAB method was used to label Fos positive cells in the lumbar spinal dorsal horn.Three hours after an intradermal injection of 20 μl 1% carrageenan or vehicle, rats were perfused intracardially with physiologic saline, followed by 10% formalin, pH 7.4.The L3-5 spinal cord segments were processed for immunostaining.The samples were post-fixed for 2 h and then placed in a 30% buffered sucrose solution overnight.Ten micron transverse sections were cut with a cryostat.Fos immunostaining was then performed.Briefly, after blocking by 10% normal goat serum in phosphate buffered saline with 0.3% Triton X-100 for 1 h at room temperature, the sections were incubated with the anti-Fos primary antibody for 24 h at 4 °C.The sections were then rinsed and incubated with biotinylated anti-rabbit secondary antibody for 1 h at room temperature.The sections were rinsed again and incubated with Extravidin for 1.5 h.Then a DAB kit was used for final staining of Fos.Sections were put under the dissection microscope for visual determination of the reaction time.We used ultra pure water to end the reaction.The sections were dehydrated and coverslipped for further analysis.Counts of Fos-labeled cells were made on 6 randomly selected lumbar spinal cord sections for each rat.The investigator responsible for plotting and counting the labeled cells was blind to the drug treatment of each animal.The superficial dorsal horn of the spinal cord was identified using dark-field illumination.A nucleus was counted as Fos positive if it was entirely filled with black reaction product.Based on nuclear size, cell shape, and extensive experience of our laboratory with this technique, we determined that the Fos positive cells counted were neurons.All results are presented as the mean ± SEM.For the analysis of thermal threshold, repeated-measures one-way ANOVA followed by Bonferroni post hoc tests or Student’s t-test were used.For patch clamp recordings, the Student's t-test was used.For the Fos labeled cell counts, statistical comparisons were performed using the Student’s t test to compare the means between groups.Differences between means were considered statistically significant at P < 0.05.Based on preliminary experiments and data from Guan and colleagues, we used 1 mg/kg s.c. loperamide for behavioral experiments.The average body weight of pups was 9 ± 0.5 g at P3, 11.1 ± 0.6 g at P4, 14.9 ± 0.4 g at P5, 17.2 ± 0.6 g at P6, and 19.8 ± 0.5 g at P7.Administration of loperamide for 5 consecutive days did not affect the body weight when compared with the standard growth curve.A single injection of loperamide did not prolong the paw withdrawal latency compared to vehicle in the Hargreaves test.This is consistent with the effect of 1 mg/kg of morphine in adult rats submitted to the Hargreaves test.Subsequent intraplantar injection of carrageenan, to produce a local inflammation, exposed the antinociceptive effect of loperamide.Rats were injected with carrageenan 1% in the left hind paw and were tested 5 min, 30 min, 1 h, and 4 h later.At 5 min, vehicle treated rats had a markedly decreased withdrawal latency from 6.5 ± 0.4 s to 1.9 ± 0.3 s.Loperamide produced significant antinociception at all time points, with some remaining nociception at 5 min when comparing pre- vs.
post-carrageenan withdrawal latencies.To determine if loperamide penetrates the BBB after systemic administration, we injected a single dose of 5 mg/kg, s.c., and measured the concentration of loperamide in the serum and the CSF 6 h later using mass spectrometry.The serum concentration of loperamide was 334.7 ng/mL, while the CSF concentration was only 6.9 ng/mL, about 50 times lower.To further investigate whether loperamide could induce opioid-induced hyperalgesia (OIH) in neonates, just as morphine does, we administered loperamide daily from P3 to P7 and performed daily Hargreaves plantar tests.Compared to rats receiving vehicle, those receiving loperamide did not show any significant change in the withdrawal latency from the nociceptive stimulus during the first 3 days.However, starting at P6, the loperamide group exhibited a significantly decreased latency with an average latency of 5.8 ± 0.3 s compared to the vehicle group with an average latency of 7.8 ± 0.4 s.This difference between the two groups was accentuated on P7 with an average withdrawal latency of 5.1 ± 0.6 s for the loperamide group vs. 7.6 ± 0.3 s for the vehicle group.The lumbar DRGs from the above rats were then collected on P7.Patch clamp recordings on small diameter neurons were conducted on whole DRGs as we previously reported.DRG neurons from the loperamide group demonstrated an average current threshold of 145.1 ± 15.1 pA, which was significantly lower than that of the vehicle treated group.Also, in the loperamide group the membrane threshold was reduced from −14.1 ± 2.7 mV to −22.1 ± 3.9 mV.Finally, we sought to determine if loperamide-induced hyperexcitability of primary sensory neurons would result in a greater activation of spinal superficial dorsal horn neurons, where primary nociceptive afferents terminate.Rats were treated with loperamide or vehicle for 5 days.Six hours after the last dose of loperamide or vehicle, carrageenan 1% or its vehicle was injected in the left hind paw, and rats were euthanized 3 h later.We did not perform heat latency testing prior to euthanasia to avoid a second stimulus, which would have been a confounding variable.The lumbar spinal cords were collected and processed for Fos immunocytochemistry.Fos positive cells were counted in the four different groups.The increase in the number of immunopositive cells was localized mainly in the superficial dorsal horn.There, in vehicle-carrageenan treated rats the Fos neuronal count was 25.1 ± 6.1 per section.However, in loperamide-carrageenan treated rats Fos was seen in twice as many neurons: 52.4 ± 7.5 per section.Lastly, in both vehicle and loperamide treated rats, when saline instead of carrageenan was injected in the left hind paw far fewer Fos expressing cells could be detected: 2.3 ± 1.1 and 2.4 ± 3.1 cells per section, respectively.Our results show that a peripherally acting opioid is antinociceptive in the newborn rat, and induces OIH within a few days with continued administration.We suggest that these effects of loperamide are enabled by the high expression of MORs in primary sensory neurons during the first 2 post-natal weeks.We chose loperamide because it is a MOR agonist which does not cross the BBB.It is also antinociceptive in models of inflammatory, cancer, and neuropathic pain.Clinically, loperamide is used mainly to treat traveler's diarrhea and has been listed as one of the fundamental drugs by the World Health Organization.Only a few reports suggest a possible analgesic effect in humans, for instance when it
is applied topically.Limited data are available on its use during the neonatal period, mainly pertaining to the treatment of short bowel syndrome in the NICU.In rodents and human neonates, a therapeutic dose of loperamide should produce antinociception essentially through the periphery given that the BBB is formed and functional.In agreement, we measured only 6.9 ng/mL of loperamide in the CSF of P3 rats after a high systemic dose.While statistically significant, such a low CSF concentration is unlikely to be antinociceptive since, in adult rats, at least 30 μg of intrathecal loperamide is needed to produce significant antinociception.Given that the total volume of CSF in an adult rat is approximately 275 μl, a dose of 30 μg of loperamide should result in a CSF concentration of about 109 ng/mL, which is 15 times greater than the 6.9 ng/mL observed here.Because the CSF measurement was obtained after a dose 5 times greater than that needed to produce a robust antinociceptive effect, we suggest that the systemic dose of loperamide of 1 mg/kg in our protocol was too low to alter the pain behavior through a direct effect on the CNS.Within days of receiving loperamide, neonates exhibited a decreased nociceptive latency, which is characteristic of OIH.This was an unexpected result and to our knowledge the first demonstration of OIH following the administration of a peripherally acting opioid.Peripherally mediated OIH, however, was recently reported in adult mice, in which co-administration of a peripherally acting opioid antagonist with morphine blocked the appearance of hyperalgesia from daily morphine.By using conditional knockout mice for MOR in TRPV1 neurons, the authors also concluded that DRG nociceptive neurons are critical in the appearance of morphine-associated hyperalgesia and tolerance.This is consistent with our results showing that peripheral sites are involved in OIH.Tolerance to loperamide, in turn, was previously reported for adult rats, which is significant since it shares some, but not all, cellular mechanisms with OIH.Also, OIH has been previously reported in neonates after repeated administration of the brain penetrant opioid morphine.Repeat administration of opioids profoundly affects peripheral neuronal physiology in adult rats.Patch clamp recordings in newborns showed similar hyperexcitability in DRG neurons, which is in agreement with our behavioral data.After 5 days of loperamide treatment, small diameter DRG neurons from P7 rats displayed increased excitability and decreased threshold.These results confirm previous studies showing that primary sensory neurons are involved in OIH.In loperamide treated rats, the increased Fos expression in the superficial dorsal horn suggests that the OIH-associated increased excitability of primary sensory neurons leads to central sensitization.In fact, OIH is generally seen as a form of central sensitization, involving glutamate, dynorphins, descending facilitation, and greater response to nociceptive neurotransmitters.Recent studies suggest, however, that the peripheral nervous system is also involved.Notably, Corder and colleagues recently found that opioid-induced long-term potentiation at the first synapse in the spinal cord was dependent on pre-synaptic MOR-expressing nociceptive neurons.Opioid-induced sensitization of peripheral nerve endings and DRG neurons would result from transcriptional changes and post-translational changes such as phosphorylation-mediated relocalization and upregulation of receptors and ion channels.Ion channels
would open more frequently and for longer times, causing increased nociceptive afferent activity.The sensitization of primary sensory neurons would result in an increase in synaptic transmission at the spinal level and ensuing plasticity.An opioid-induced increase in the activity of nociceptive afferents in the term and preterm neonate's nervous system might shape sensory responses for life.Features that facilitate this plasticity in the newborn include: greater amounts of Ca++-permeable, GluN2B-containing NMDA receptors, a widespread distribution of NMDARs throughout the spinal dorsal horn, and increased responsiveness to glutamate.All these data indicate that despite their non-brain penetrant advantage, peripherally acting opioids might still have an impact on the CNS.Since MOR is involved in the initiation but not the maintenance of OIH, preventing or reversing the post-translational changes might make it possible to maintain the analgesic effects of opioids without OIH.This hypothesis, however, remains to be tested.We chose a peripherally acting opioid, both because neonates get significant antinociception and because the BBB can be impermeable to these drugs.Serendipitously, we observed that within days loperamide treated rats developed OIH and central sensitization.These findings support the use of brain sparing opioids in the newborn.Strategies to avoid the hyperalgesic effect of peripheral opioids will need to be developed to transition to clinical trials.The authors have no conflicts of interest to declare.This work was supported by the Painless Research Foundation.
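To illustrate how the behavioral readout described above can be processed, the following is a minimal sketch rather than the authors' analysis code: the three withdrawal-latency readings per hind paw are capped at the 20 s cutoff and averaged to give each animal's thermal threshold, and the loperamide and vehicle groups are compared at a single time point with an unpaired t-test. All numerical values in the example are hypothetical.

```python
# Minimal sketch of the Hargreaves-test processing described above:
# three latency readings per paw are capped at the 20 s cutoff and averaged,
# then two treatment groups are compared at one time point.
# The example latencies are hypothetical, not the study's data.
import numpy as np
from scipy import stats

CUTOFF_S = 20.0  # cutoff used to prevent skin damage

def thermal_threshold(trials_s):
    """Mean withdrawal latency (s) of repeated trials, each capped at the cutoff."""
    return float(np.mean(np.minimum(trials_s, CUTOFF_S)))

# Hypothetical per-animal trial triplets at P7 (seconds).
loperamide = [thermal_threshold(t) for t in [(5.0, 5.5, 4.6), (6.1, 4.9, 5.3),
                                             (4.4, 5.8, 5.1), (5.6, 4.7, 5.2)]]
vehicle    = [thermal_threshold(t) for t in [(7.9, 7.2, 8.1), (7.0, 7.7, 7.4),
                                             (8.3, 7.1, 7.6), (7.5, 8.0, 7.2)]]

t_stat, p_value = stats.ttest_ind(loperamide, vehicle)
print(f"loperamide {np.mean(loperamide):.1f} s vs vehicle {np.mean(vehicle):.1f} s, p = {p_value:.3f}")
```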
Effective pain management in neonates without the unwanted central nervous system (CNS) side effects remains an unmet need. To circumvent these central effects we tested the peripherally acting (brain sparing) opioid agonist loperamide in neonate rats. Our results show that: 1) loperamide (1 mg/kg, s.c.) does not affect the thermal withdrawal latency in the normal hind paw while producing antinociception in all pups with an inflamed hind paw. 2) A dose of loperamide 5 times higher resulted in only 6.9 ng/mL of loperamide in the cerebrospinal fluid (CSF), confirming that loperamide minimally crosses the blood–brain barrier (BBB). 3) Unexpectedly, sustained administration of loperamide for 5 days resulted in a hyperalgesic behavior, as well as increased excitability (sensitization) of dorsal root ganglia (DRGs) and spinal nociceptive neurons. This indicates that opioid induced hyperalgesia (OIH) can be induced through the peripheral nervous system. Unless prevented, OIH could in itself be a limiting factor in the use of brain sparing opioids in the neonate.
443
Electric-field-induced structural changes in multilayer piezoelectric actuators during electrical and mechanical loading
Piezoelectric actuators serve as system-enabling components in many applications.For example, they are the core element of modern energy-efficient fuel injection systems in automotive engines, and are also used in power harvesting, micro actuation, and vibration suppression .Among the leading types of piezo actuators on the market today are multilayer piezoelectric actuators (MAs) with interdigitated metallic electrodes.MAs allow the fabrication of large ceramic components that are operated at lower voltages compared to bulk monolithic structures, i.e., the applied voltage needed to achieve the desired strain is in the volt range rather than the kilovolt range.This is accomplished by increasing the number and decreasing the thickness of individual piezoceramic layers.The majority of piezoelectric actuators are composed of lead zirconate titanate, Pb(Zr,Ti)O3, a binary solid solution that exhibits enhanced dielectric and piezoelectric properties near the morphotropic phase boundary (MPB) .When discussing the performance of MAs utilizing MPB PZT, domain wall motion is often invoked as a dominant mechanism, whereas phase fraction changes are less often discussed.Applying high electric fields and/or mechanical loads to PZT enables domain wall motion, which modifies the electromechanical properties of the material .The existence of domain walls is dependent on the crystallographic structure of the material, and they have been widely studied using electrical property measurements, X-ray diffraction , piezoresponse force microscopy and transmission electron microscopy .The knowledge of domain behavior under external loads is essential for understanding macroscopic material behavior.The scientific community has made considerable efforts to study the effects of electrical and mechanical loading on bulk PZT ceramics.Bipolar strain and polarization hysteresis of soft lanthanum-doped PZT under compressive mechanical loads, i.e.
up to 60 MPa, have been reported by Lynch .From this study, material depolarization, decreases in coercive field, and changes in the piezoelectric coefficient were observed in response to increasing applied compressive stress.The observed behavior in Lynch's study was attributed to ferroelastic domain reorientation.Chaplya and Carman provided a more detailed explanation of the observed response by conducting both bipolar and unipolar strain and polarization measurements with applied compressive loads on commercial PZT-5H samples .Unipolar measurements showed that, at intermediate compression loads, maxima in the strain and polarization are observed.The enhanced response was explained in terms of the available volume fraction of non-180° domains and the difference in domain wall pressure created by electrical and mechanical loading.These results suggested that the peak in enhanced strain and polarization response can shift to a higher stress value upon increasing the amplitude of the electric field, thus demonstrating a dependence on the balance between the electrical and mechanical energy.Ultimately, all previous studies share a common observation in the material's response: the enhancement in electric-field-induced strain under moderate compressive pre-stresses is attributed to non-180° domain wall motion.To separate the earlier-described effects of mechanical constraints during application of an electric field, in situ XRD coupled with strain and polarization measurements is needed.XRD adequately probes the various electromechanical responses of ferroelectrics and yields insight into the material's structure during applied stimuli .This work reports phase rearrangement and domain texture of commercial MAs, quantitatively measured using in situ high energy XRD coupled with macroscopic electromechanical response measurements.Unipolar polarization and strain measurements were synchronized to the measured XRD patterns to establish structure-property relations that describe the enhanced strain response seen in MAs under the application of compressive pre-stresses.The results show that, while domain switching does contribute to the properties, a portion of the enhanced response also originates from electric-field-induced phase transitions.Hence, this work establishes new structure-property relationships in PZT-based MAs that can be used to design actuators with enhanced electromechanical properties by tailoring electric-field-induced phase transitions and ferroelectric domain configurations.Commercially available Multilayer Piezoelectric Actuators were supplied by the company TDK.The MAs belonged to the latest generation of so-called “High-Active Stacks”, whereby the passive zone is minimized down to approximately 110 μm thickness.The MAs had an original size of 3.4 × 3.4 × 27 mm3 and consisted of a stack of ∼70 μm-thick piezoceramic layers with interdigitated copper electrodes.The piezoceramic is lead zirconate titanate, PZT, with a near-MPB composition and a multi-phase structure that appears predominantly tetragonal from X-ray diffraction.MA samples were ground down to a size of 1.8 × 3.4 × 27 mm3 by the supplier.Copper wires were soldered on each termination side and a silicone-based passivation layer was applied over the whole MA to avoid electric arcing on the ground surface during testing.The samples were supplied in the unpoled state.As-received MA samples were mounted into a uniaxial screw-driven testing machine, which was placed inside the experimental hutch of the 11-ID-C
beamline at the Advanced Photon Source.Samples were placed between two steel loading punches, and ceramic discs with a thickness of 5 mm each were placed between each side of the MA and the punches to provide electrical isolation and stiff contact points.Samples were aligned so that the X-ray beam passed through the center of the sample.The testing machine was closed-loop controlled with a sampling rate of 1 kHz.The force applied to the sample was measured using an S-shaped 5 kN load cell.Strain was measured by three independent strain gauges, each with a maximum amplitude of ± 2.5 mm, which were fixed on the lower punch 120° apart and measured the relative displacement of the upper punch with respect to the lower one.The data measured by each strain gauge were collected separately and averaged afterwards.The total voltage was applied with a Keithley 2450 Source Meter Unit.Charge flow at the MA was measured using a Sawyer-Tower circuit by connecting the sample in series to a reference capacitor with a total capacitance of 1430 μF.Voltage was measured at the reference capacitor using a Keithley 6514 Electrometer.Each measurement was performed on different MA samples, which were always loaded in their as-received, unpoled state.Before applying any mechanical load, a virgin reference XRD-pattern was acquired on each sample.Subsequently, mechanical load was applied and then three voltage ramps were performed while keeping the mechanical load constant.The voltage was incremented in 10 V steps at a rate of 10 V/s.Each voltage value was held for the time necessary to acquire one XRD pattern.Diffraction patterns were measured in situ during simultaneous application of electric fields and mechanical load using high-energy X-rays at beamline 11-ID-C of the Advanced Photon Source at Argonne National Laboratory.Each individual sample was mounted in the screw-driven testing fixture described above.Electrode connections were attached to the high voltage source via alligator clips.Diffraction data were measured in transmission mode using a wavelength of 0.11798 Å and a 0.5 mm × 0.5 mm slit size.A cerium dioxide standard was used to calibrate sample-to-detector distance, beam center, and detector orthogonality for data reduction from two-dimensional XRD patterns using the Fit2D program .Samples were cycled 3 times using a unipolar triangular waveform and an electric field amplitude of 3 kV/mm.A script was written to effectively synchronize the diffracted X-ray beam with applied voltage and macroscopic measurements.A schematic of the experimental setup used to simultaneously measure the macroscopic properties and structural response to applied pre-stress and electric field is shown in Fig. 1.A full wiring diagram is available online in the supporting information.The Fit2D program allows for reduction of 2D images into one-dimensional patterns of intensity versus 2θ .Measured 2D diffraction patterns were integrated every 15° in the azimuthal direction from the 0° direction, which is centered at the vertical section of the image, to the 90° direction.An illustration of this process, Fig.
S.2, is available online as supporting information.A total of 7 integrated diffraction patterns resulted from this method, and each azimuthal sector represents scattering from scattering vectors oriented at approximately the angle of that sector relative to the electric field direction.Azimuthally dependent data provide a full representation of the material's texture, which is a critical component when evaluating ferroelectric materials under both electrical and mechanical loads.Each azimuthal sector provides information about planes that are oriented with their plane normal at the angle of the azimuthal sector.For example, when examining the 111 reflection for the azimuthal sector of 45°, the diffraction signal that is measured in this sector is attributed to 111 lattice planes that are oriented 45° away from the sample normal direction.The sample normal direction in this experiment is denoted as the 0° azimuthal sector.A schematic of this representation is shown in Fig. 1.Data analysis was conducted using the Material Analysis Using Diffraction (MAUD) program which allows for refinement of the crystal structure and texture of the material .From data reduction, all integrated azimuthal sectors were used in the data refinement process in MAUD.MA samples were mixed phase, and underwent substantial changes in peak intensity with applied electric and mechanical field.Therefore, the initial state of the sample was refined first to obtain reasonable atomic positions and thermal parameters.For the refinement analysis, a two-phase model of tetragonal and rhombohedral phases was used for the PZT.During the refinement, a spherical harmonic texture model and a weight strain orientation distribution function were added and refined to account for domain texture and anisotropic macrostrain .The effect of pre-stress on the non-poled MA samples was studied at eight different, constant pre-stress values.A portion of the diffraction data from the 0° sector, which represents XRD signals from scattering vectors that are aligned parallel to the load direction, immediately after pre-stress application, is shown in Fig. 2.The XRD data suggest that the PZT in the MA is a mixed-phase system with both tetragonal and rhombohedral phases.There is a decrease in intensity of the 001T peak as a function of pre-stress amplitude.Different amplitudes of pre-stress may induce domain texture in the sample; this would be observed in the diffraction data as changes in intensity of the 001T and 100T reflections.Fig. 2 shows these types of characteristic changes, indicating that domain texture is induced in the MA with increasing pre-stress.These observations are consistent with previous studies done on bulk PZT ferroelectrics .Pre-stressed samples were electrically loaded by applying 3 cycles of a unipolar triangular waveform with an amplitude of 3 kV/mm.The results under the lowest pre-stress amplitude of 2 MPa and at maximum electric field are first reported and shown in Fig. 4.The tetragonal peaks in Fig. 4 exhibit large intensity changes as a function of orientation relative to the electric field.In ferroelectrics, domain reorientation causes a preferred distribution of intensities in the diffraction pattern as a function of angle.These large intensity changes for the tetragonal phase are quite noticeable when observing the 002T and 200T reflections in the 0° and 90° sectors in Fig.
4, which describes the non-180° domain reorientation that occurs within the tetragonal phase.Applying an electric field to a tetragonal ferroelectric causes an increase in volume fraction of 001T-oriented domains parallel to the electric field direction.This is observed in the diffraction pattern as an increase in the 002T reflection in the 0° sector compared to the unpoled .These results suggest that a substantial amount of non-180° domain reorientation occurs in the tetragonal phase when initially poling the MA.The preferred direction of alignment of long-axis domains with applied compressive pre-stress versus electric field are opposite to one another.From Fig. 4, applying an electric field leads to an increase in the 002 reflection intensity in the parallel-to-electric field direction.From Fig. 2 increasing the applied compressive pre-stress leads to a decrease of the 001 reflection intensity in the parallel-to-stress direction.Consequently, applying a compressive pre-stress orients long-axis domains perpendicular to the stress direction while applying an electric field orients these domains parallel to the electric field direction.The electric-field-dependent evolution of the of 001T/100T and 100R reflections that have their scattering vectors parallel to the electric field direction and at 2 MPa pre-stress is shown in Fig. 5, for the first part of the electric field cycle.The initial pattern in Fig. 5 is the same pattern as that reported in Fig. 2 for 2 MPa pre-stress, and for the unpoled pattern in Fig. 4.This pattern evidences that the tetragonal phase is dominant at the start of the experiment.Upon application of the electric field, the pattern exhibits the most striking changes in intensity near the coercive field, ∼0.5 kV/mm.The changes in intensities could be interpreted as a tetragonal-to-rhombohedral electric-field-induced phase transition.Similar observations have been seen recently by other researchers in bulk MPB PZT .Rietveld refinement of the diffraction patterns was used to quantify phase fractions and how they change with electric field amplitude.Fig. 6 shows the tetragonal and rhombohedral phase fractions for the 2 MPa pre-stressed sample.Prior to electric field application, a 25/75 phase ratio is present in the sample.With increasing field amplitude, the rhombohedral phase content increases at the expense of the tetragonal phase.At the maximum electric field of 3 kV/mm, the R/T phase fraction ratio reaches a maximum value of ∼60/40.The second and third electric field cycles show consistent trends with the first cycle, which show that at 3 kV/mm, the R/T ratio is consistently ∼60/40.The shaded region in Fig. 6 indicates the error bars for each reported value.The error bars of each value was taken from the MAUD results and multiplied by 2 based on the variation seen when conducting refinements with different starting parameters.This shows that, although least squares minimization is a powerful technique, it can misrepresent the true and complete experimental uncertainties .Fig. 6 presents the changes in phase fraction during electric field cycling for three representative pre-stress values, 2 MPa, 70 MPa, and 300 MPa.From Fig. 6 there is a distinct increase in phase difference at 70 MPa.This increase in phase difference with applied pre-stress is similar to the observed increases in non-180° domain reorientation reported in similar studies done on bulk MPB PZT .At higher pre-stresses, i.e. 
≥ 100 MPa, the mechanical energy is high enough to significantly suppress electric-field-induced phase changes, therefore, the R/T ratio remains close to the initial value of ∼30/70 despite electric field application.The competition between electrical and mechanical energy can be seen when observing the rhombohedral phase fraction at each maximum electric field.At pre-stresses ≥ 70 MPa, electric field cycling leads to a slight decrease of the induced rhombohedral phase.The change in the achieved R/T ratios, i.e. at each 3 kV/mm step, with increasing cycle number suggests that the mechanical loading can lead to a fatigue in the material properties.The changes in phase fraction for all pre-stress values are shown in the supporting information available online.An electric-field-induced phase transition is also occurring in parallel when an electric field is applied to the MA."The phase transition alters the phase fractions, which affects the overall contribution of non-180° domain reorientation from each phase to the MA's electromechanical response.Increasing pre-stress reduces the phase fraction of the rhombohedral phase, and a reduction in the rhombohedral phase fraction corresponds to an increase in the tetragonal phase.In a hypothetical case, if the rhombohedral phase fraction was suppressed to a value of 5% then the non-180° domain reorientation from the rhombohedral phase would have a smaller effect on the electromechanical properties than the tetragonal phase which would have a phase fraction of 95%.As can be seen from Fig. 7, there is a decrease in the maximum achievable rhombohedral phase fraction with increasing pre-stress, which leads to a decrease in the contribution of non-180° domain reorientation from the rhombohedral phase.Thus, the contribution from non-180° domain reorientation in the tetragonal phase increases with applied pre-stress.One of the novel findings of this work is the increase in Δphase with increasing pre-stress, for pre-stresses ≤ 70 MPa, and applied electric field.At the highest pre-stress, the tetragonal phase becomes the dominant phase, as shown in Figs. 2 and 6, while at the lowest pre-stress and high electric fields, the rhombohedral phase dominates.When both electrical and mechanical energies compete during applied pre-stress and electric field, the highest Δphase is seen at 70 MPa.This indicates that this is the pre-stress value where the mechanical energy can efficiently reverse the changes of the electric-field-induced phase transition and the electric field has enough energy to induce the phase transition again.As evident from Figs. 6–8 and the related discussion, applying a pre-stress above 70 MPa results in both a suppression of the rhombohedral phase and of non-180° domain reorientation within both the tetragonal and rhombohedral phase.Consequently, the d33* value displayed in Fig. 8 drops when the pre-stress is increased above 70 MPa.These results demonstrate that the maxima obtained in the change in domain texture and phase difference are dependent on how much electrical energy is supplied to the system and how it interacts with the mechanical energy.The increase of Δphase, observed in Fig. 
8, with pre-stresses ≤ 70 MPa can be explained by two possible mechanisms.The first mechanism is strain compensation due to material deformation.MPB PZT compositions are known to be elastically soft compared to end-member compositions .The electromechanical response of polycrystalline PZT is influenced by intergranular strains, which is known to inhibit the material response due to adjacent grains mechanically constraining one another.The result of a grain undergoing deformation due to an applied electric field can cause an adjacent grain to compensate the response by either elastically deforming, undergoing domain reorientation, or experiencing a phase transition.The second mechanism is named domain variant selection and describes how the resulting domain states lead to an ideal phase fraction ratio that is dependent on the direction and magnitude of the applied stimulus.A model that simulates phase fractions and domain textures in a tetragonal and rhombohedral mixed-phase ferroelectric/ferroelastic material is used to further explore possible phase fraction changes in response to electric field.The model is built upon earlier models for single-phase materials presented in Refs. .Polarization rotation is a mechanism that can well explain the electric-field-induced phase transitions in ferroelectric single crystal studies.The seminal work conducted by Fu and Cohen , Bellaiche et al. , Davis et al. , and Damjanovic have provided insight into both polarization rotation and extension.Extension of these concepts to polycrystalline ceramics should be undertaken with caution since the measured X-ray signal represents an average of the grain orientations present within the sample.It may be tempting to ascribe changes in phase fraction of a mixed phase system with applied electric field to polarization rotation.However, adopting the definition of Fu and Cohen as well as Damjanovic, the present results cannot be readily ascribed to polarization rotation.The current XRD results do not provide evidence of a phase transition sequence involving a monoclinic phase as proposed by Bellaiche et al. nor do the strain and polarization measurements resemble the behavior seen in the single crystal studies reported in Refs. 
.Higher resolution XRD measurements would be needed to identify a phase transition sequence of T-MC-R with application of electric field.With the current data, the results are adequately explained with a two-phase model.The results so far demonstrate that the origin for enhanced d33∗ with increasing pre-stress is a result of an enhancement in non-180° domain reorientation and electric-field-induced phase transitions.In polycrystalline tetragonal and rhombohedral PZT, non-180° domain reorientation is accompanied by strain that is dependent on the extent of domain reorientation and the spontaneous strain of the phase.This strain mechanism has been largely explored, but strain generated from electric-field-induced phase transitions has not been as prevalently discussed as strain generated from non-180° domain reorientation.For electric-field-induced phase transitions, two mechanisms are proposed to explain its contribution to strain: an increase in the rhombohedral phase which has a larger lattice strain contribution than the tetragonal phase, leading to higher d33 response , and the induced volumetric and shear strain as a result of a phase transition.In the latter case, two components can describe material deformation: the volumetric strain and shear strain.Shear strain relates to the change in angle between two different sample directions as the material distorts with applied stimulus, while volumetric strain describes the changes in volume with applied stimulus.The spontaneous strain of a phase is a measurement of shear strain in the unit cell and is independent of the volumetric strain.The increase in ΔP with pre-stress correlates with the behavior of the phase difference and changes in domain texture.It is well-known that polycrystalline rhombohedral compositions are known for having higher measurable polarization values than tetragonal compositions as well as more domain wall motion .Even theoretical models that account for saturated domain states in polycrystalline samples show that rhombohedral compositions exhibit a higher maximum achievable polarization and strain value than tetragonal compositions ."Therefore, it's expected that a higher rhombohedral phase fraction leads to a higher measured polarization.Recall that when a phase transition occurs it alters the amount of non-180° domain reorientation contribution coming from each respective phase.Upon electric field application, the rhombohedral phase becomes dominant which exhibits a higher polarization response and enhanced non-180° domain reorientation.This enhances the MAs polarization response at maximum electric field relative to the remanent field when the tetragonal phase is dominant.Thus, when both the electrical and mechanical energies compete, the larger the phase difference and domain reorientation change, the higher the ΔP.This explains the observed peak at 70 MPa pre-stress and the subsequent decrease in ΔP at higher pre-stresses.The polarization and strain measurements for all other applied pre-stresses are available online as supporting information.The results from this work demonstrate that electric-field-induced phase transitions and non-180° domain reorientation both contribute to the increased electromechanical response of the MAs at pre-stress levels ≤70 MPa.Although MA manufacturers apply a pre-stress to avoid cracking of the MA during poling rather than to increase the electric field-induced strain, they benefit from this “sweet spot” in the electromechanical performances.Importantly, it is necessary to 
identify the stress at which the properties drop for each new generation of actuators.The present work provides a structural explanation for the origin of enhanced electromechanical response with applied compressive pre-stress and unipolar cycling.Hence, this study can aid in designing new generations of MAs with enhanced properties achieved through phase tuning at the MPB.Synchronized X-ray diffraction and macroscopic property measurements were used to study the effect of electrical and mechanical loading in situ on commercial PZT-based MAs.Rietveld refinement was implemented on the measured diffraction data by using a two-phase model of tetragonal, P4mm, and rhombohedral, R3m.Phase fraction versus electric field for various pre-stresses were reported.Refinement results suggest that a combination of electric-field-induced phase transitions and non-180° domain reorientation is responsible for the enhanced response of MAs during applied electric field and pre-stresses lower than 70 MPa.This study marks one of the first evidence of an increase in the available volume fraction of non-180° domain reorientation due to an applied pre-stress, which has been previously inferred on bulk MPB PZT and MAs.Application of an electric field promotes a phase transition from the tetragonal to the rhombohedral phase while a mechanical stress on previously poled samples favors the tetragonal phase.This leads to a maximum in the phase difference and non-180° domain reorientation at a 70 MPa pre-stress.Macroscopic strain and polarization values have a maximum between 50 and 70 MPa, which correlates with the increased phase difference and non-180° domain reorientation.Electric-field-induced phase transitions contribute to both the macroscopic strain and polarization because of the rhombohedral phase having a larger lattice strain contribution and polarization response than the tetragonal phase, and due to the induced volumetric and shear strain that occurs as a result of a phase transition.At high pre-stress values, ≥100 MPa, electric-field-induced phase transitions and domain reorientation are significantly reduced, which explains the reduction in electromechanical properties.This effect is attributed to the large energy barrier that is imposed by the pre-stress on the MA.The present XRD results coupled with macroscopic measurements suggest that electric-field-induced phase transitions play a substantial role in the enhanced electro-mechanical response than previously suggested.
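As a practical illustration of the sector-wise data reduction described in the methods (seven 15° azimuthal bins between the 0° and 90° directions), the following minimal sketch bins a calibrated 2D detector image into per-sector intensity-versus-2θ patterns with numpy. It is not the Fit2D/MAUD workflow used in this work; the beam center, pixel size and sample-to-detector distance are placeholder inputs that would come from the CeO2 calibration, and detector non-orthogonality is ignored.

```python
import numpy as np

def reduce_to_sectors(image, beam_center, pixel_size, distance,
                      sector_width=15.0, n_sectors=7, tth_bins=2000):
    """Bin a 2D detector image into 1D intensity-vs-2theta patterns for
    azimuthal sectors measured from the vertical (load/field) direction."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dx = (x - beam_center[0]) * pixel_size      # horizontal offset (m)
    dy = (y - beam_center[1]) * pixel_size      # vertical offset (m)
    r = np.hypot(dx, dy)
    tth = np.degrees(np.arctan2(r, distance))   # scattering angle 2theta (deg)
    azi = np.degrees(np.arctan2(np.abs(dx), np.abs(dy)))  # 0 deg = vertical axis
    edges = np.linspace(tth.min(), tth.max(), tth_bins + 1)
    patterns = []
    for k in range(n_sectors):                  # sectors centred at 0, 15, ..., 90 deg
        lo = max(k * sector_width - sector_width / 2.0, 0.0)
        hi = min(k * sector_width + sector_width / 2.0, 90.0)
        mask = (azi >= lo) & (azi <= hi)
        summed, _ = np.histogram(tth[mask], bins=edges, weights=image[mask])
        npix, _ = np.histogram(tth[mask], bins=edges)
        patterns.append(summed / np.maximum(npix, 1))   # mean counts per 2theta bin
    return 0.5 * (edges[:-1] + edges[1:]), patterns

# Placeholder geometry: 2048 x 2048 detector, 200 um pixels, 1.8 m distance.
frame = np.random.poisson(50, (2048, 2048)).astype(float)
two_theta, sector_patterns = reduce_to_sectors(frame, beam_center=(1024.0, 1024.0),
                                               pixel_size=200e-6, distance=1.8)
```

Each returned pattern corresponds to scattering vectors lying roughly along one of the 0°, 15°, …, 90° directions relative to the electric field, matching the sectors refined in MAUD.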
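The intensity interchange between the 002T and 200T reflections that is discussed above is often condensed into a single texture number. A common choice in the ferroelectric diffraction literature is the multiple of a random distribution (MRD) for 002, f002 = 3(I002/I′002)/(I002/I′002 + 2 I200/I′200), where primed intensities come from the unpoled reference pattern. The snippet below is offered only as a back-of-the-envelope check on the trends described in the text, not as the spherical-harmonic texture refinement actually performed in MAUD; the intensities used are hypothetical.

```python
def mrd_002(i002, i200, i002_ref, i200_ref):
    """Multiple of a random distribution (MRD) for 002-oriented tetragonal domains.

    i002, i200         -- integrated 002T / 200T intensities in one azimuthal sector
                          under the applied stress and/or field
    i002_ref, i200_ref -- the same intensities from the virgin (unpoled) pattern
    Returns f002: 1 for a random domain state, 3 for complete c-axis alignment
    along the sector direction, and below 1 when c-axes are pushed away from it.
    """
    r002 = i002 / i002_ref
    r200 = i200 / i200_ref
    return 3.0 * r002 / (r002 + 2.0 * r200)

# Hypothetical intensities: a 0-degree-sector pattern in which compressive
# pre-stress has lowered the 002T intensity relative to the virgin state.
print(round(mrd_002(i002=40.0, i200=130.0, i002_ref=60.0, i200_ref=120.0), 2))  # ~0.71
```

A value below 1 parallel to the load axis, as in this example, corresponds to the text's observation that compressive pre-stress orients long-axis domains perpendicular to the stress direction, whereas electric field loading drives f002 above 1 in the field direction.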
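The macroscopic quantities referred to throughout (polarization from the Sawyer-Tower circuit, strain from the averaged displacement sensors, and the large-signal coefficient d33* = S_max/E_max) follow directly from the logged voltage, displacement and charge data. The sketch below shows that arithmetic under stated assumptions: the 1430 μF reference capacitor matches the setup, but the electrode area, internal layer thickness and gauge length are placeholders that would have to be replaced by the actual MA geometry.

```python
import numpy as np

C_REF = 1430e-6      # F, Sawyer-Tower reference capacitor used in the setup
AREA = 49e-6         # m^2, assumed active electrode area of the MA (placeholder)
LAYER_T = 80e-6      # m, assumed thickness of one internal layer (placeholder)
GAUGE_LEN = 30e-3    # m, assumed length spanned by the displacement sensors (placeholder)

def polarization(v_cap):
    """Polarization (C/m^2): the reference capacitor stores the charge that
    flowed through the actuator, so Q = C_ref * V_cap and P = Q / area."""
    return C_REF * np.asarray(v_cap) / AREA

def strain(d1, d2, d3):
    """Average the three 120-degree-spaced displacement readings, then normalize
    by the gauge length to obtain macroscopic strain."""
    return np.mean(np.vstack([d1, d2, d3]), axis=0) / GAUGE_LEN

def d33_star(s, v_applied):
    """Large-signal coefficient d33* = S_max / E_max for a unipolar cycle,
    with the field taken across a single internal layer."""
    e = np.asarray(v_applied) / LAYER_T
    return (np.max(s) - s[0]) / np.max(e)

# Usage with logged arrays (placeholders, not real data):
# p_loop = polarization(v_cap_log); s_loop = strain(g1_log, g2_log, g3_log)
# print(d33_star(s_loop, v_applied_log))
```

Under the same assumptions, the ΔP discussed in the text can be taken as the difference between the polarization at maximum field and the value at the end of the unipolar ramp.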
The effects of electrical and mechanical loading on the behavior of domains and phases in Multilayer Piezoelectric Actuators (MAs) are studied using in situ high-energy X-ray diffraction (XRD) and macroscopic property measurements. Rietveld refinement is carried out on the measured diffraction patterns using a two-phase tetragonal (P4mm) and rhombohedral (R3m) model. Applying an electric field promotes the rhombohedral phase, while increasing compressive uniaxial pre-stress prior to electric field application favors the tetragonal phase. The competition between electrical and mechanical energy leads to a maximal difference between electric-field-induced phase fractions at 70 MPa pre-stress. Additionally, the available volume fraction of non-180° domain reorientation that can be accessed during electric field application increases with compressive pre-stress up to 70 MPa. The origin of the enhanced strain and polarization with applied pre-stress is attributed to a combination of enhanced non-180° domain reorientation and electric-field-induced phase transitions. The suppression of both the electric-field-induced phase transitions and domain reorientation at high pre-stresses (>70 MPa) is attributed to a large mechanical energy barrier and reflects the competition between electrical and mechanical energy within the MA during applied stimuli.
444
Off the wall: The rhyme and reason of Neurospora crassa hyphal morphogenesis
The filamentous fungus Neurospora crassa has been used for decades as a model system to investigate the genetic basis of phenotypical traits, metabolic pathways, circadian rhythms, and gene silencing, among other biological processes.Most recently, N. crassa has become also an important model microorganism to investigate cellular processes such as morphogenesis, cell polarization and cell-cell fusion.As in other filamentous fungi, N. crassa hyphae have a cell wall that allows them to deal and interact with the surrounding environment, and that determines their own growth and shape.While the composition of Neurospora’s cell wall is well known, complete understanding of the underlying molecular and cellular mechanisms that contribute to its synthesis, assembly and remodeling is lacking.The extraordinary advancement of live cell imaging technologies together with the tractable genetic manipulation of Neurospora’s cells, have allowed the study of the fate and mode of operation of the cell wall synthesis machinery, including organelles, associated cytoskeleton and regulatory components involved in their secretion.This review focuses on the subcellular mechanisms that lead to cell wall assembly and remodeling with a special emphasis in apical processes.First, we summarize the current knowledge on Neurospora cell wall composition and structure and discuss the proposed models for cell wall synthesis.Next, the review concentrates on the cellular processes involved in the intracellular trafficking and sorting of Neurospora’s cell wall building nanomachinery.A special emphasis is given to the composition and function of the Spitzenkörper, the apical body that serves as the main choreographer of tip growth and hyphal morphogenesis.The cell wall of N. crassa hyphae is a composite of superimposed layers.The innermost layer, the closest to the plasma membrane, is an alkali-insoluble fibrillar skeleton containing primarily glucan and a small percentage of chitin, whereas the outermost layer is an alkali-soluble amorphous cement containing cell wall proteins covalently bound to the β-1,3-glucan either via the remnants of a GPI moiety or directly through an α-1,6 bond to the core of N-linked gel type polysaccharides.Conidial cell walls contain also α-1,3-glucan, a component that is not detected in cell walls of vegetative hyphae.In other fungi, α-1,3-glucan is often found agglutinating chitin and β-1,3-glucan to protect their exposure to the external milieu.Besides those main components, small amounts of glucuronic acid have been also detected in N. crassa cell walls; nevertheless, the exact nature and function of the glucuronic acid remains unknown.During the 1960s, the cell wall chemistry was considered a characteristic to systematically classify fungi since some monosaccharides were consistently present in the cell walls of defined fungal taxa: d-galactose and d-galactosamine, l-fucose, d-glucosamine, and xylose.Earlier inferences made by cell wall biochemists have been confirmed by molecular phylogenetics and phylogenomics studies.Recently, a high proportion of β-1,3 linked fucose-containing polysaccharides was found in two Mucoromycota species, Phycomyces blakesleeanus and Rhizopus oryzae.Corresponding genes involved in fucose metabolism were found in this early divergent phylum, while they were absent in N. 
crassa.Moreover, it was found that both Mucoromycota species only harbor four of the seven CHSs classes typically observed in Dikarya, and a new class that seems exclusive of the Mucoromycota.Nevertheless, contemporary biological systematics has ruled out cell wall composition as synapomorphy, as well as lysine biosynthesis and the presence of ergosterol in cell membranes, since such traits are neither exclusive nor always present in all members of the Kingdom Fungi.The CWPs include a combination of glycosyl hydrolases such as chitinases, chitosanases, β-1,3-endoglucanases, β-1,6-endoglucanases, exoglucanases, mixed linked glucanases, and β-1,3-glucanosyltransferases.All these CWPs could have a role in cell wall remodeling, presumably a necessary process for hyphal growth and branching to ensue.As for other identified CWPs, very few of them have known associated functions.HAM-7 is a cellular receptor that acts during cell anastomosis.ACW-2, ACW-3, and ACW-7 are cell wall GPI-modified CWPs containing a Kre9 domain important for β-glucan assembly.ACW-5 and ACW-6 contain CFEM domains rich in cysteines found in proteins involved in fungal pathogenesis.The construction of the cell wall is the result of a series of finely orchestrated events.Yet, an understanding of the mechanisms that take place during tip elongation, branching and spore formation is still limited.In vegetative hyphae, the synthesis and early assembly of the cell wall components occur at the tips as a consequence of a highly polarized secretory process.The main structural polysaccharides are synthesized at secretion sites.In contrast, CWP and polysaccharides of the amorphous layer are pre-synthesized intracellularly, presumably through the ER-to-Golgi secretory pathway, and incorporated into the cell wall.Based on observations in a variety of fungi including yeasts, at least two models for cell wall synthesis have been proposed.The unitary model of cell wall growth, the purpose of which was to explain hyphal shape generation, proposed that the cell wall construction during apical extension requires a delicate balance between secreted synthesizing enzymes and lytic enzymes.This model emerged upon observations in Mucor rouxii cells after chitin synthesis inhibition with polyoxin D, which resulted in impaired growth and spore germination, followed by cell tip bursting, right at the sites where chitin synthesis takes place.These results supported the hypothesis that hyphal growing tips have cell wall lytic potential that must be gradually released and delicately coordinated with polysaccharide synthesis, which can be easily disturbed by external stimuli.A few years later, based on cell wall fractionations of pulse-chase labeled Schizophyllum commune, the steady-state model tried to explain the differential cell wall composition between the apex and the subapex.This other model suggested that cell wall material is assembled into the apex as non-fibrillar chains that become gradually crosslinked by glucanosyltransferases, leading to fibril crystallization at the subapex, which contributes to the hardening of the cell wall.Hence, the steady-state model does not call for the need of plasticizing pre-existing assembled material, although it considers the action of enzymes at the subapex that transfer β-1–3 glucans to chitin, which rigidify the cell wall during hyphal morphogenesis.The steady-state model favors the mechanism for hyphal extension proposed by Robertson in 1959, which involved insertion of new wall material at the apex, and 
rigidification of the newly formed wall at the base of the extension zone.An integrated interpretation of the aforementioned cell wall growth models considers the simultaneous controlled action, in space and time, of biosynthetic enzymes, hydrolytic-loosening enzymes and rigidifying enzymes.The proposed integrated model hypothesizes that once synthesized, chitin and β-1,3-glucans chains are exposed to hydrolysis by chitinases and β-1,3-glucanases, respectively.The hydrolyzed polysaccharides display new free termini that serve as substrate for cross-linking enzymes that would interconnect amenable residues to harden the cell wall, promoting its maturation.The presence of enzymes able to break down pre-existing polysaccharides at the extension zone suggested the malleability of the tip cell wall, which ultimately would allow the insertion of nascent cell wall polysaccharides.Wall deformability would also be necessary in subapical areas of branch emergence.Moreover, in regions behind the extension zone, a maturation process takes place where further material, both fibrillar and amorphous, is added or existing material is cross-linked, generating a thicker “non extensible” wall.Interestingly, none of the models proposed how the amorphous material integrates into the cell wall or its role in shape generation and wall maturation.Knowledge of how cell wall synthesizing and remodeling enzymes accomplish their functions is limited.The following section summarizes the available information regarding cell wall synthesis enzymes in N. crassa.β-1,3-glucan polymers are synthesized in N. crassa by the glucan synthase complex, constituted by a catalytic subunit, FKS-1, and at least one regulatory subunit, RHO-1.GS-1 is another protein important for β-1,3-glucan synthesis since suppression of gs-1 impairs β-1,3-glucan synthase activity and cell wall formation.Although its precise role remains unknown, biochemical evidence showed co-sedimentation of GS-1 with cell fractions containing β-1,3-glucan synthase activity, which suggested that it constitutes part of the GSC.Fks1 is present in Ascomycota, Basidiomycota, Blastocladiomycota, Chytridiomycota, Cryptomycota, Glomeromycota and some Incertae sedis species."Similar to Aspergillus fumigatus, N. crassa contains a single copy of Fks1, which is required for cell viability and cell wall integrity.N. crassa FKS-1 protein structure comprises a conserved large hydrophilic central domain flanked by six and eight transmembrane domains at its N- and C- terminus, respectively.To determine the subcellular location and dynamics of N. crassa FKS-1, the GFP encoding gene was inserted in frame ∼200 amino acids upstream of the first transmembrane domain, close to the N-terminus of the protein.This strategy was followed since our previous attempts to C- and N-terminal tagging had proven unsuccessful.Live cell imaging of N. 
crassa GFPi-FKS-1 revealed its sub-cellular location at growing hyphal tips, specifically at the macrovesicular region of the SPK, similarly to the spatial pattern observed for GS-1.FKS-1 was not only confined to the SPK, it was also detected at the foremost apical regions of the hyphal PM and slightly merging into the proximal limit of the subapical endocytic ring.This distribution led us to speculate that this region is where the macrovesicles containing FKS-1 might be discharged from the SPK and where FKS-1 is synthetically active before being inactivated.Although the precise mechanism of the synthetic activity of FKS-1 at the hyphal tip remains elusive, the apical localization of LRG-1 and RGF-1 were also indicative of the hyphal region where FKS-1 is presumably active.In Schizosaccharomyces pombe, a fungus with no chitin in the cell wall, the FKS-1 orthologs Bgs1p and Bgs4p have been detected at septum formation sites.By contrast, in N. crassa neither FKS-1 nor GS-1 have been detected at septa, in agreement with earlier ultrastructural and biochemical studies showing chitin as the major polysaccharide of septa in N. crassa.Rho1 is a well-conserved protein across all fungal taxa that belongs to the family of Rho GTPases, molecular switches that through signal transduction pathways are involved in the regulation of different cellular processes, including cell wall integrity maintenance.Both in A. fumigatus and in S. cerevisiae, Rho1 was identified as a component of the GSC.In S. cerevisiae, Rho1p regulates the synthesis of β-1,3-glucan in a GTP dependent manner and it seems that in N. crassa RHO-1 has a similar regulatory role in β-1,3-glucan synthesis.Phenotypic analysis of N. crassa rho-1 mutants revealed not only that RHO-1 is essential for cell viability but also that it is crucial for cell polarization and maintenance of CWI through the MAK-1 MAP kinase pathway via its direct interaction with Protein Kinase C 1.Although the interaction between FKS-1 and RHO-1 has not been confirmed in N. crassa, fluorescently tagged RHO-1 was also detected at the SPK.RGF-1, a RHO-1 guanine nucleotide exchange factor, displayed a similar spatial distribution than FKS-1.Whether RHO-1 and its regulator RGF-1 interact with FKS-1 or are required for its localization/activation needs to be determined.An appealing hypothesis is that RHO-1 might be strategically positioned at the SPK acting as a sensor ready to relay the signals to its CWI effector PKC-1 under cell wall stress conditions, as it has been observed in S. cerevisiae.GS-1 has orthologs only in the Ascomycota and the Basidiomycota, confirming that fungi from these phyla evolved specific cell wall synthesis machinery.The best-characterized N. crassa GS-1 ortholog is S. cerevisiae Knr4p/Smi1p.Both GS-1 and Knr4p/Smi1p share a well-structured globular core flanked by two N- and C-terminal intrinsically disordered arms with a high capacity to form protein-protein complexes.The N-terminal arm is essential for Knr4p/Smi1p cellular localization and interaction with partner proteins.The unstructured arms of N. 
crassa GS-1 could also play similar roles, since the N-terminally mCherryFP-tagged version of GS-1 accumulated in the cytosol and the C-terminally GFP-labeled version, localized to the outer SPK layer, impaired the hyphal growth rate.Knr4p/Smi1p interacts with proteins involved in CWI, bud emergence, and cell polarity establishment.Although Knr4p/Smi1p does not interact with Fks1p, both of them are part of the PKC1-SLT2 signaling cascade, where Knr4p/Smi1p physically interacts and coordinates the signaling activity of Slt2p.Chitin is a β-1,4-linked homopolymer of N-acetyl glucosamine residues synthesized by chitin synthases, a family of enzymes that catalyze the transfer of GlcNAc from UDP-GlcNAc to the reducing end of a growing chitin chain.CHSs are polytopic proteins containing multiple transmembrane spanning domains and three conserved sequence motifs QXXEY, EDRXL, and QXRRW.Additionally, they contain a conserved catalytic domain, PF03142.Up to 15 CHS-encoding genes have been found in the genomes of Dikarya fungi, although only one to seven CHS-encoding genes are usually present in the genomes of Ascomycota.More expanded CHSs families are found in Mucoromycota, Chytridiomycota and Blastocladiomycota fungi.The N. crassa genome contains seven CHS encoding genes: chs-1, chs-2, chs-3, chs-4, chs-5, chs-6, and chs-7.According to their amino acidic sequences, CHSs are grouped into seven classes, and in three divisions.N. crassa Division 1 CHS comprises CHS-1, -2 and -3 belonging to classes III, II and I, respectively.In addition to the conserved domain PF03142, CHSs belonging to division 1 contain also a PF08407 and PF01644 domains.Division 2 contains CHSs of classes IV, V and VII, all of them characterized by a cytochrome b5-binding type domain.A DEK domain is found at the C-terminus of CHS-5 and CHS-7, which present also a myosin-like motor domain at their N-terminus.In CHS-5 and CHS-7, the MMD domain has an ATPase activity domain that belongs to the larger group of P-loop NTPases.In CHS-5, the ATPase activity domain bears a putative phosphorylation site, a purine-binding loop, switch I and switch II regions, a P-loop and a SH1 helix.In CHS-7 the ATPase activity domain is shorter, and lacks the sites described for CHS-5.In addition, they both lack the IQ motif, characteristic of MYO-5, involved in binding calmodulin-like light chains.In classes V and VII CHSs of A. nidulans and U. maydis, the N-terminal MMD has a 20% identity to the MMD of class V myosin, while N. crassa CHS-5 and CHS-7 show 25% and 20% identity with MYO-5, respectively.CHS-6, belonging to class VI, does not group with any other CHS and it is the only CHS with an N-terminal signal peptide.All seven N. crassa CHSs localize at the core of the SPK and developing septa, although with subtle distribution differences.These observations, in combination with immunoprecipitation assays followed by mass spectrometry analyses, suggested that at least CHS-1, CHS-4 and CHS-5 are transported in distinct chitosome populations.As mentioned above, chitin is synthesized in situ at sites of secretion and CHSs are therefore delivered to those sites in an inactive form.Some CHSs are zymogens whose activation requires a proteolytic processing.To date, specific proteases involved in CHSs activation have not been identified.However, immunoprecipitation/MS assays of N. 
crassa CHS-1, CHS-4 and CHS-5 have revealed putative proteases, which could participate in their regulation.The fungal cell wall must serve as armor while still pliable.It is hypothesized that some cell wall resident GH facilitate the breakage between and within polysaccharides allowing the cell wall remodeling during morphogenetic changes and developmental transitions.Chitinases, on the one hand, are hydrolytic enzymes that efficiently cleave the β-1,4 linkage of chitin, releasing oligomeric and dimeric products.Fungal chitinases are classified within the GH-18 family.Chitobiose residues can be further converted to monomeric residues by β-N-acetylglucosaminidases, which in fungi have only been described within the GH-20 family.Chitinases can be further classified as endo-hydrolases that cleave chitin at random positions, or exo-hydrolases that release chitobiose from either end of the polymer.A number of enzymes from the GH-18 family contain a secretory signal peptide for translocation across the ER membrane and entry into the secretory pathway, and a GPI anchor signal, which might determine their residency at the PM and/or cell wall, as well as N- or O-linked glycosylation sites for oligosaccharide modifications.This is why GH-18 glycoside hydrolases are considered potential cell wall modifying enzymes.Filamentous ascomycetes have generally between 10 and 30 GH-18 chitinase genes.The N. crassa genome includes 12 genes that encode putative chitinases belonging to the GH-18 family.A functional analysis of chitinases from GH-18 and GH-20 families in N. crassa revealed that 10 of these genes are non-essential.However, deletion of the gene chit-1 resulted in reduced growth rate compared to the wild type.Despite this evidence is not enough to claim involvement of CHIT-1 in cell wall remodeling, its N-terminal signal peptide and a predicted C-terminus GPI anchor motif, suggest its potential cell wall localization and a possible role in modifying resident chitin.In addition, N. crassa CHIT-1 is 39% identical to the S. cerevisiae Cts1, an endochitinase involved in mother-daughter cell separation.N. crassa CHIT-1 shows 36% identity with ChiA from Aspergillus nidulans, a protein localized at conidial germinating tubes, at hyphal branching sites and hyphal tips.Despite their extensive presence in fungal genomes, very little is known about the function of chitinases during polarized tip growth.While it has been suggested they could have a role in plasticizing the cell wall, to date studies are either inconclusive or demonstrate that they have no role in fungal morphogenesis.Glucanases, on the other hand, catalyze the breakage of the α or β glycosidic bond between two glucose subunits.They can be classified as endo- or exo-depending on the site where they cut along the chain.The N. crassa genome contains at least 38 proteins annotated as glucan-modifying enzymes distributed among different GH families.The protein encoded by NCU06010 corresponds to a mutanase or α-1,3-glucanase from the GH-17 family, which has been found only to be expressed throughout the conidiation process.This is consistent with the presence of α-1,3-glucans exclusively in the cell wall of asexual spores; however, no phenotypical defect has been observed in deletion mutants.Interestingly, deletion of the orthologous protein Agn1 in S. pombe leads to clumped cells that remained attached to each other by septum-edging material, which in S. 
pombe is known to be a combination of α and β-1,3-glucans.The annotated gene NCU07076 encodes a putative β-1,3-glucanase classified into the GH-81 family.There have been no specific studies on this protein in N. crassa; however, their homologs have been related to cell wall remodeling during budding and cell separation in S. cerevisiae and C. albicans, respectively, and endolysis of the cell wall during sporulation in S. pombe.Another example is the NCU03914 translation product, that corresponds to a non-characterized β-1,3-exoglucanase belonging to the GH-5 family.S. pombe Exg1p, Exg2p and Exg3p, orthologs of NCU03914, are secreted to the periplasmic space, GPI-PM bound, or remain cytoplasmic, respectively.Interestingly, overexpression of Exg2p resulted in increased accumulation of α and β-1,3-glucans at the cell poles and septum, but deletion of exg genes seemed dispensable during these events.Recently, the putative N. crassa GPI-modified β-1,3-endoglucanases BGT-1 and BGT-2 were tagged with GFP and imaged in live hyphae.Both BGT-1 and BGT-2 were found to accumulate at the hyphal apical PM immediately behind the apical pole.Furthermore, both enzymes concentrated at polarization sites that seemingly involve cell wall growth and remodeling, such as septum development, branching, cell fusion and conidiation.BGT-1 and BGT-2 belong to the GH-17 family.Strains in which bgt-1 or bgt-2 were deleted displayed only a very slight reduction in growth rate; however, single bgt-2 and double bgt-1::bgt-2 deletion mutants exhibited an increased resistance to the cell wall stressors Calcofluor White and Congo Red, indicating an altered cell wall architecture.In addition, these mutants displayed conidiation defects, suggesting a role for BGT-1 and BGT-2 on the re-arrangement of glucans at the conidiophore cell wall to allow conidial separation.The N. crassa cell wall is a mixture of interconnected and branched β-1,3-glucans, chitin and proteins.Glucanosyltransferases are responsible for this activity.Many β-1,3-glucanases are able to hydrolyze and further transfer the cleaved residues to a new polysaccharide chain, thus behaving as glucanosyltransferases as well.Homologous proteins of N. crassa BGT-1 and BGT-2 have been previously reported as β-1,3-endoglucanases with glucanosyltransferase activity.BGT-1 and BGT-2 show 64% and 47% identity with A. fumigatus Bgt2, a PM-GPI bound branching enzyme that hydrolyzes β-1,3-glucan and transfers the residues to a different chain of β-1,3-glucan via a β-1,6-linkage.Even when further biochemical evidence is required, the considerable identity with A. fumigatus proteins suggests a similar role for BGTs in N. crassa.CWPs with a role in β-1,3-glucan remodeling belonging to the GH-72 family or GEL, are crosslinking enzymes with predicted GPI signals that have shown an active role in cell wall organization.GEL family members cleave an internal glycosidic linkage of the β-1,3-glucan chains and transfer the newly formed reducing end to the non-reducing end of another β-1,3-glucan molecule.This results in the elongation of the polymer creating multiple anchoring sites for mannoproteins, galactomannans, chitin, and reorganizing β-1,3-glucans in the cell wall.GEL family members have been identified in several fungal species such as S. cerevisiae, C. albicans, and A. fumigatus.N. crassa has 5 genes encoding for GEL family members: gel-1 or gas-5, gel-2, gel-3, gel-4, and gel-5.There is no evidence of the biochemical activity of these proteins in the cell wall of N. 
crassa; however, studies on mutant strains lacking one or several of the gel genes suggest that they play differential roles.GEL-3 is constitutively expressed and, in combination with GEL-4 and GEL-2, seems to be directly involved in vegetative growth, while in combination with GEL-1, participates actively in aerial hyphae and conidia production.GEL-1 and GEL-4 also display an active role in cell wall remodeling in response to stress conditions.While the main putative activity of the GEL family of β-1,3-glucanosyltransferases is the incorporation of newly synthetized β-1,3-glucan into the wall, it has been claimed that they are also important for glycoprotein incorporation.The CRH protein family is a second family of crosslinking enzymes with an active role during remodeling of cell wall polymers and anchored to the PM via a GPI anchor.The members of this family, Crh1p, Crh2p and Crr1p in S. cerevisiae, are homologous to bacterial β-1,3/1,4-glucanases and plant xyloglucan endotransglycosylases/hydrolases.They act at different developmental stages in yeast and are classified within the GH-16 family at the CAZy database.Crh1p and Crh2p are the transglycosidases responsible for the transfer of chitin chains to β-1–6-glucan and to β-1–3-glucan in S. cerevisiae in vivo and the crosslinks they generate are essential for the control of morphogenesis.There are 13 proteins members of the GH-16 family codified in the N. crassa genome; from them only MWG-1, GH-16-7, CRF-1, GH-16-11, and GH-16-14 share 46%, 48% and 47% identity, respectively, with S. cerevisiae Crh1p.There is no evidence that indicates the active role of these proteins in crosslinking chitin to β-1,6-glucan and to β-1,3-glucan in N. crassa; however, the Crh enzymes are exclusive to fungi and well conserved across fungal genomes.The functional redundancy they share could be essential to act only during specific developmental stages and cell wall remodeling of the fungus.In their journey to the N. crassa hyphal tip, chitosomes first accumulate at the SPK core and then presumably fuse with the cell PM to deposit CHSs at sites of cell wall expansion.These microvesicles have average diameters between 30 and 40 nm and a characteristic low buoyant density.For many years, it has been intriguing how the traffic of chitosomes toward the zones of active cell wall growth is organized and regulated in filamentous fungi.More than four decades ago, it was speculated that chitosomes could originate by self-assembly in the cytoplasm, could correspond to the intraluminal vesicles of multivesicular bodies, or could be derived from the endoplasmic reticulum.To elucidate the vesicular origin and traffic of chitosomes carrying CHSs in N. crassa, the effect of brefeldin A, an inhibitor of vesicular traffic between the ER and Golgi, was evaluated.Under the effect of the inhibitor at a concentration of 200 μg mL−1, CHS-1 continued to reach the core of the SPK, while CHS-4 did not, indicating that at least for some CHSs, such as CHS-4, transport occurs through the classical ER-to-Golgi secretory pathway.Phosphorylation plays a role in the regulation of CHSs in C. albicans, where the correct localization and function of Chs3p depends on its phosphorylated state.Similarly, the phosphorylation state of Chs2p in S. cerevisiae determines its temporal and spatial localization.In S. 
cerevisiae Chs2p is directly phosphorylated by the cyclin-dependent kinase Cdk1p, and retained in the ER until the cell comes out of mitosis.Afterwards, Chs2p is dephosphorylated by the phosphatase Cdc14p.In a pulldown screen in N. crassa, the phosphatase PP1 was identified as a putative CHS-4 interacting protein, and it was proposed that it could be acting as a regulatory protein with potential role in dephosphorylation of CHS-4.In S. cerevisiae, the vesicular traffic of Chs3p is well characterized.At the ER, Chs3p is palmitoylated by the action of palmitoyltransferase Pfa4p.This modification is necessary for Chs3p to achieve a competent conformation required for its ER exit.Moreover, the ER export cargo Chs7p is responsible for directing the appropriate folding of Chs3p, avoiding its aggregation in the ER.Chs3p is then transported to Golgi.From there, it can exit the Golgi through the formation of a complex called exomer, via an alternative exomer-independent pathway, or through an AP-1 dependent pathway.The exomer in S. cerevisiae consists of five proteins, which includes Chs5p and a family of four Chs5p-Arf1p binding proteins named ChAPs.At the mother- bud neck septation sites, Chs4p and Bni4p mediate Chs3p activation and retention.In N. crassa, genes encoding orthologs of all the proteins involved in the vesicular trafficking of Chs3p in S. cerevisiae have been identified.For Pfa4p two orthologs were identified: palmitoyltransferase PFA-4 and palmitoyltransferase PFA-3.For Chs7p, two orthologous proteins were identified; CSE-7, a “chitin synthase export chaperone”, and a “hypothetical protein”.For the five components of the exomer complex, only two orthologous proteins were identified; CBS-5, a “chitin 5 biosynthesis protein” ortholog for Chs5p, and BUD-7, a “Bud site selection protein” ortholog for the four ChAPs.For Chs4p three orthologs were identified: chitin synthase activator CSA-1, chitin synthase activator CSA-2, and chitin synthase regulator CSR-3.From those, CSA-1 was found as an interacting partner of CHS-4, the ortholog of S. cerevisiae Chs3p.For Bni4p an ortholog hypothetical protein was identified.Recently, in N. crassa, the role of CSE-7 on the secretory traffic of CHS-4 was evaluated.In a Δcse-7 mutant background, CHS-4 arrival to the septum and to the core of the SPK was disrupted, and it instead accumulated in an endomembranous system distributed along the cytoplasm.The complementation of the mutation with a copy of the cse-7 gene restored the location of the CHS-4 in zones of active growth, corroborating the role of CSE-7 as an ER receptor cargo for CHS-4.Fluorescently tagged CSE-7 was found in a network of elongated cisternae similar to the compartments where CHS-4-GFP was retained in the mutant background for Δcse-7.Unexpectedly, CSE-7 appeared also at septa, as well as at the core of the SPK.For several decades S. cerevisiae Chs7p was thought to be a Chs3p chaperone confined to the ER.However, recent studies have shown that Chs7p leaves the ER and apparently travels in the same vesicles that transport Chs3p to the cell surface, where it promotes Chs3p activity.These results agree with the presence of CSE-7 at the SPK in N. crassa, suggesting that CSE-7 could have an additional function at the apex.S. cerevisiae ΔCHS7 and C. albicans chs7Δ null mutants showed reduced levels of chitin content as well as decreased chitin synthase activity.In N. 
crassa, the Δcse-7 and Δchs-4 mutants did not show any defects in growth rate, colony aberrant phenotype, or hyphal morphology when compared to the parental and wild type strains.In contrast, Δchs-1, Δchs-3, Δchs-6, and Δchs-7 single mutants, as well as the Δchs-1; Δchs-3 double mutant showed a considerable decrease in growth rate and a disturbed phenotype.The lack of a mutant phenotype in N. crassa Δcse-7 suggested that CSE-7 does not have a role in the secretory pathway of CHS-1, -3, -6, and -7.Extensive work is needed to identify the putative proteins involved in the secretory route of other CHSs in N. crassa.SEC-14 cytosolic factor identified among the putative CHS-4 and CHS-5 interacting proteins could be potentially involved in the traffic of chitosomes.The homologs of Sec14p in S. cerevisiae is involved in regulating the transfer of phosphatidylinositol and phosphatidylcholine and protein secretion.The endomembranous system of N. crassa is a highly dynamic and complex network of elongated cisternae that extends throughout the cytoplasm from region III to distal regions of the hyphae.It comprises the so-called tubular vacuoles and the ER.In S. cerevisiae, the ER is described as a peripheral ER organized as an interconnected tubules and as perinuclear ER.In A. nidulans and Ustilago maydis the ER appears as peripheral or cortical strands and nuclear envelopes.As mentioned above, CSE-7, the putative ER receptor for CHS-4, was localized in a highly dynamic NEC in close proximity to some nuclei but not exactly circling them.Two different vacuolar markers, the VMA-1 and the dye Oregon Green 488 carboxylic acid, and two ER markers, NCA-1 and SEC-63, partially co-localized with CSE-7 at the NEC.Transmission electron microscopy identified rough ER sheets abundantly distributed in subapical regions of the hyphae; nevertheless, smooth ER could not be detected.The RER was not observed in the vicinity of the nuclear envelope.The vacuoles appeared as extensive sheet-like cisternae with electron-transparent lumen, edges coated with an electron dense fibrillar material and tube-like extensions with electron-dense lumen containing heterogeneous inclusions, these two vacuolar morphologies appeared connected.Collectively, all the evidence gathered for N. crassa suggested that the GSC might be transported towards the hyphal tip in macrovesicles, a population of carriers different than the chitosomes.Fluorescent recovery after photobleaching allowed analysis of the dynamics of the carriers containing the GFP tagged version of FKS-1.Fluorescence appeared at the immediate layers surrounding the core of the SPK and extended progressively to the most outer layers.Quantitative analysis of these FRAP experiments revealed significant differences between the half-time recovery values of FKS-1 and GS-1.These differences might indicate the type of association of these proteins with the macrovesicles.Whereas GS-1 transiently interacts with the vesicles, FKS-1 might be embedded in the vesicular membranes.These observations contrast with what has been described for U. maydis where Msc1p, a class V CHS with an N-terminal MMD, is co-delivered to the apical PM together with Gsc1p or Chs6p, suggesting one single population of secretory vesicles carrying cell wall synthesizing enzymes.Electron micrographs of U. 
maydis filamentous cells indicate the presence of a few, similarly sized, vesicles in the apical region, which would explain the joint delivery of Gsc1p, Chs6p and Msc1p.More recently, it has been shown that the SPK of C. albicans also contains homogeneously sized secretory vesicles, suggesting that co-delivery of cell wall synthesizing enzymes could also occur in this and other fungal species.However, as cellular and biochemical evidence from N. crassa and other species indicates, co-delivery is not a general secretion mechanism of cell wall synthesizing enzymes.The differences observed in the organization of the vesicular conveyors of cell wall synthesizing enzymes in fungal species extend to their spatial organization at the apex, where they arrange in a variety of configurations besides the classical round SPK.Köhli et al. established a correlation between the distribution of the apical vesicles and the hyphal growth rate of Ashbya gossypii.More recently, Dee et al. suggested that the distribution pattern of the apical vesicles is a specific trait of the major fungal phyla.Given the diversity of the fungal species they analyzed, which span a wide range of growth rates, it is hard to conclude that those patterns are the exclusive solution to a kinetic secretory necessity, as Köhli et al. suggested.Different laboratories have undertaken the challenge of elucidating the process by which the secretory vesicles are generated and transported to specific cell regions.The pioneering research in S. cerevisiae uncovered the mechanistic role of many of the regulatory molecules involved in the traffic of secretory vesicles.In contrast to budding yeast, which experiences transient polarized growth, the growth of fungal hyphae occurs continuously at a very high rate at their tips.This suggests important divergences in the mechanisms of vesicle traffic between filamentous and yeast fungal forms.Vesicle secretion in all eukaryotic cells is orchestrated by the coordinated activity of small GTPases from the Rab family, protein coats, molecular motors, tethering factors and SNAREs (soluble NSF attachment protein receptors).Secretory proteins and their adaptors/receptors, recruited at specific foci of the donor membrane organelle, are enclosed in COPII-coated vesicles that initially bud off on their way towards the acceptor membrane.The traffic of these secretory vesicles, assisted by molecular motors and the cytoskeleton, is finely coordinated by the action of Rab GTPases that act as signaling molecules ensuring the arrival of the vesicles at the specific downstream compartment.Tethering and fusion of the vesicles with the compartments are facilitated by the complex interplay between Rabs, monomeric and multimeric tethering proteins, and SNAREs, which facilitate the final delivery step of the vesicle cargoes.The activity of Rab proteins oscillates depending on the guanine nucleotide (GDP or GTP) to which they are bound.GEFs (guanine nucleotide exchange factors) and GAPs (GTPase-activating proteins) are implicated in the exchange of GDP for GTP and the hydrolysis of GTP to GDP, respectively.When bound to GTP, Rab proteins undergo a conformational change, which is required for their transient association with secretory carriers, whereas binding to GDP provokes the opposite effect, inducing the release of the Rabs from vesicle membranes.Anterograde traffic of vesicles within the secretory pathway is mainly coordinated by three Rab proteins: Rab1, Rab8, and Rab11.Rab1 is involved in early events of anterograde transport of vesicles in fungi, plants, insects and humans.For instance, Ypt1p, the Rab1
mammalian ortholog in S. cerevisiae, is involved in three important steps of the secretory pathway: 1) anterograde traffic of vesicles from ER to early Golgi compartments; 2) Intra-Golgi vesicles transport; and 3) early endosomes to late Golgi trafficking.In filamentous fungi, the functions of Ypt1/Rab1 have apparently diversified.In N. crassa, YPT-1 was observed at the core of the SPK, coinciding with the spatial distribution of all CHSs, and suggesting its participation in the traffic of secretory vesicles to the hyphal tips.A similar localization for RabO was described for A. nidulans.In addition, co-immunoprecipitation followed by LC/MS of CHS-1, CHS-4, and CHS-5 detected YPT-1 as one of the putative interacting proteins.Furthermore, YPT-1 sedimented in fractions with a density range similar to the density of fractions with high CHS activity.Together, the evidence points to YPT-1 being the Rab involved in the traffic of chitosomes in N. crassa.Whether the traffic of CHS enzymes to the SPK is exclusive to Rab1 orthologs in other fungal systems is still unknown.In A. nidulans, the apical recycling of ChsB, the fungal CHS-1 ortholog, is assisted by Rab11 from the TGN.Although the Rab1 orthologs in N. crassa and A. nidulans share similarities such as their presence at the SPK and their requirement for cell viability, the stratification of this Rab was identified only in N. crassa.In addition to its apical distribution, YPT-1 was detected co-localizing with bona fide markers of the early and late Golgi markers such as USO-1 and SEC-7, respectively.In vivo analysis revealed a transient co-localization of YPT-1 with both markers.The dynamic distribution of YPT-1 within Golgi compartments and its conspicuous arrival to the core of the SPK unfolded an additional role of the traffic coordination of this Rab protein that might be essential for the transport of the cell wall nanomachinery.Studies conducted in the budding yeast have shown that, before being directed to the cell surface from late Golgi, the budding of vesicular carriers is assisted by Ypt31p and Ypt32p, both orthologs of Rab11.The membrane association of Ypt31/32p with newly formed post-Golgi vesicles participate in the activation of Sec4p through the recruitment of its GEF factor Sec2p.In A. nidulans, the association of its Rab11 ortholog, RabE, at late Golgi cisternae during their maturation process switched its identity into post-Golgi membrane carriers that accumulated at the SPK.In vivo localization experiments of YPT-31, the N. crassa Rab11 orthologue, revealed the distribution of the fluorescently tagged YPT-31 at the macrovesicular layer of the SPK.This particular distribution was confirmed when either the Rab GTPase YPT-1 or the Exocyst subunit EXO-70 tagged with GFP were co-expressed with the tDimer-2-YPT-31.YPT-31 and YPT-1 did not share the same distribution pattern at the SPK, whereas with EXO-70 a partial co-localization was observed.This particular localization of YPT-31 at the SPK resembled the spatial arrangement of GS-1 and FKS-1.These observations, although provide only circumstantial evidence, relate this Rab in N. 
crassa with the GSC.In contrast to YPT-1, YPT-31 was not detected at Golgi cisternae, suggesting its exclusive participation in post-Golgi trafficking steps of the secretory vesicles.Analysis of vesicles dynamics through FRAP experiments revealed that YPT-31 vesicles arrive at very high rates at the hyphal tip.The fluorescence recovery of YPT-31 at the SPK was similar to the recovery rates of Rabs in other fungal systems, whose growth rates are significantly lower compared to N. crassa.These similarities suggest that the rate of arrival of YPT-31 associated vesicles to the SPK is independent of the hyphal growth rate.One would expect that the rate of arrival of vesicles with cell wall synthesizing components correlates with the hyphal growth rates but based on the above-mentioned FRAP analyses it seems that downstream regulators synchronize the discharge of the vesicles.From earlier studies it has been demonstrated that hyphal elongation rates occur in pulses of growth.Superresolution microscopy analysis in cells of A. nidulans revealed that secretory vesicles carrying the Class III ChsB are discharged from the SPK as clusters in an intermittent mode.In S. cerevisiae, the last steps of the anterograde traffic of vesicular carriers are linked to the activity of the Rab8 ortholog Sec4p.Sec4p is distributed at sites of polarized secretion where its interaction with the exocyst complex subunit Sec15p assists in the tethering events crucial for fusion to the plasma membrane.The subcellular distribution of Sec4 in fungal hyphae has been described elsewhere and is usually located at the SPK.In N. crassa the distribution of SEC-4 was detected at the outer stratum of the SPK resembling the distribution patterns observed for YPT-31, GS-1 and FKS-1.This conspicuous allocation of SEC-4 labeled vesicles not only suggests that this Rab protein is associated with the traffic of a specific population of secretory vesicles but also participates in the transport of GSC.Being SEC-4 an effector of YPT-31 it was not surprising to detect both proteins at the same SPK layer in N. crassa.This SEC-4 stratification was not observed in fluorescently tagged Sec4 of other fungal orthologs, despite ultrastructural analyses of hyphal tips that have revealed the presence of at least two populations of vesicles.Although further studies are needed to confirm that SEC-4 assists the last traffic steps of the GSC before secretory vesicles are tethered to the target plasma membrane, the evidence so far suggests that in N. crassa SEC-4 is involved in this late secretory step of the complex.Further experimental work needs to be performed to elucidate how the Rab proteins coordinate the pre-exocytic events at the SPK at the molecular level.The orchestrated exocytosis of vesicles at active growth regions of the hyphal tips is a crucial step required for successful cell wall biosynthesis.After reaching the hyphal tip and accumulating at the SPK, a tethering process is required to dock the secretory vesicles to specific regions of growth.The tethering of vesicles is assisted by the exocyst, a conserved multimeric protein complex comprised of eight subunits: Sec3, Sec5, Sec6, Sec8, Sec10, Sec15, Exo70 and Exo84.It has been proposed that the subunits of the complex form two sub-complexes: one subgroup attaches to the membrane of the vesicles, whereas the other is docked to the PM.However, recent biochemical and structural analyses of the exocyst in S. 
cerevisiae have shown that the complex exists mainly as a stable assembly comprising all eight subunits.The architecture of the exocyst in vivo revealed a putative mechanism of how this tethering complex assists in the contact of membranes between the secretory vesicle and the PM.The results of recent single-particle Cryo-EM coupled with chemical cross-linking MS analyses has allowed a model for the assembly of the exocyst to be proposed, where the hierarchical interaction of dimeric pairs results in two higher-order structures forming the tetrameric subcomplex I and II.In particular, the CorEX motif of the Sec3 subunit was found important for recruitment of the other seven subunits and the tethering of secretory vesicles.Live cell imaging of N. crassa hyphae revealed two distribution patterns of the exocyst subunits at the hyphal tips; SEC-5, -6, -8 and -15 finely extend over the apical PM surface whereas EXO-70 and EXO-84 mostly accumulate at the outer layer of the SPK.SEC-3 displayed a distribution similar to both of the above-mentioned localization patterns.In A. gossypii, the localization of exocyst components is dependent on the hyphal growth rate.AgSec3, AgSec5, and AgExo70 accumulate as a cortical cap at the tip of slow-growing hyphae, whereas they localize at the SPK in fast-growing hyphae.In C. albicans, exocyst components Sec3, 6, 8, 15, Exo70, and Exo84 stably localized to an apical crescent.In A. nidulans, SECC, the homologue of S. cerevisiae Sec3p, was localized in a small region of the apical PM, immediately anterior to the SPK.In A. oryzae, AoSec3 was localized to cortical caps at the hyphal tip as in A. nidulans but was also found in septa.The singularities in the subcellular distribution of the exocyst complex subunits in N. crassa and other species suggest specific regulatory mechanisms of the exocyst during the last secretory steps of the cell wall biosynthetic machinery.The distribution of EXO-70 and EXO-84 subunits at the SPK outer layer in N. crassa was similar to that observed for the Rab GTPases YPT-31 and SEC-4, and the GSC.This distribution pattern strongly indicates a connection between the exocyst, regulatory proteins and the cell wall synthesis machinery.This preliminary evidence provides hints for upcoming attempts to explore the molecular mechanisms involving both groups of regulators, namely the exocyst and Rab GTPases, in the traffic of cell wall enzymes.The crucial role that the exocyst complex might have in the traffic or fusion of the biosynthetic cell wall machinery was more evident by the drastic morphology defects showed in the studies of N. 
crassa sec-5 mutants.Analysis of cryo-fixed and freeze-substituted TEM images revealed that hyphae of sec-5 mutants abnormally accumulated macrovesicles at the hyphal tips indicating a non-functional exocyst complex resulting in an aberrant compact morphology, hyper branching, and the absence of a SPK in FM4-64 stained hyphae.Although there is a correlation between the localization of the complex subunits and the identified cellular and molecular apical apparatus, the differential arrangement of the subunits at apical regions is still intriguing.Is this feature related to the regulation of a specific population of vesicles or does this localization suggest that all the vesicles of the SPK associate with the exocyst components at the macrovesicular area?,Further experimental work is still necessary to understand the mechanistic role of the complex at the hyphal apex to gain insights into the cell wall biosynthesis in N. crassa.During the sophisticated polarized growth of fungal hyphae, the continued expansion of the wall is necessary, and this process requires the coordinated action of the cytoskeleton in the transport of vesicles, proteins and organelles.In N. crassa, actin localizes at the core of the SPK, around the endocytic subapical collar forming small patches, and in association with septum formation.Actin inhibitor studies using Latrunculin A and Cytochalasin A demonstrated that the actin cytoskeleton is required for the correct localization of CHS-1 and CHS-4, β-1,3-endoglucanases BGT-1 and BGT-2, and GS-1.CHSs with a MMD, including CsmA and CsmB of A. nidulans, Wdchs5 of Wangiella dermatitidis, and Mcs1 of U. maydis require the actin cytoskeleton for proper localization.However, the MMD of A. nidulans CsmA and CsmB has a role as an anchor to the PM rather than in transport.Similarly, in U. maydis, Msc1 MMD is not important for its mobility, which is dependent on myosin-5 and kinesin-1, but instead acts as a tether that supports the fusion of Msc1-carrying vesicles to the PM.Moreover, class VII CHSs and GS glucan synthase transport relies on Mcs1.The microtubule cytoskeleton is important for hyphal morphogenesis and directionality in filamentous fungi.In N. crassa, two classes of MT-dependent motors, the minus end-directed dynein and the plus end-directed kinesins, are involved in the positioning of organelles and transport of membranes.In N. crassa, conventional kinesin-1 KIN-1 is required for vesicular transport.Nevertheless, in N. crassa Δkin-1 mutants, the stability of CHS-1 at the SPK was affected but not its long range transport, suggesting that the MT cytoskeleton is not essential for the delivery of chitosomes.Significant progress has been made in the last couple of decades relating to the understanding of the molecular and cellular processes that precede the building of the cell wall in N. 
crassa and other filamentous fungi. It is quite remarkable that, in spite of having analogous cell wall synthetic machineries, fungal species have evolved distinct secretory mechanisms. The structural diversity of the SPK across several fungal taxa is perhaps one manifestation of these differences and may reflect the particular requirements of each fungal species. A broad phylogenetic and structural analysis of the fungal secretory machinery is needed to elucidate the molecular basis of such differences. Advances in microscopy have greatly increased the temporal and spatial resolution of live imaging. Attaining real-time super-resolution imaging will be key to elucidating the routes of vesicle traffic in and out of the SPK along the associated cytoskeletal tracks. Because of their evolutionary implications and their consequences for cell wall building and architecture, the differential secretion of the main cell wall synthesizing enzymes, CHS and GSC, needs to be resolved. Moreover, it is necessary to decipher the composition of chitosomes: how many CHS subunits are contained in one chitosome and, more importantly, whether one chitosome carries more than one class of CHS. This information, together with the identification of the mode of activation and inactivation of CHS and of the cargo receptors/adaptors for all CHSs, will provide a more detailed picture of the secretion and regulation of these central players in cell wall synthesis.
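The FRAP analyses mentioned above for YPT-31 compare how quickly fluorescence returns to the SPK after photobleaching. As an illustration of how such recovery rates are commonly quantified, rather than a description of the pipeline used in the original work, the sketch below fits a single-exponential recovery model to hypothetical normalized post-bleach intensities; the data values, function names and model choice are all assumptions made for the example.

import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, mobile_fraction, k):
    # Normalized fluorescence after a bleach at t = 0; the plateau equals the mobile fraction.
    return mobile_fraction * (1.0 - np.exp(-k * t))

# Hypothetical post-bleach intensities, normalized so pre-bleach = 1 and bleach = 0.
t = np.array([0, 2, 4, 6, 8, 10, 15, 20, 30, 45, 60], dtype=float)   # seconds
f = np.array([0.00, 0.22, 0.38, 0.50, 0.58, 0.64, 0.74, 0.79, 0.83, 0.85, 0.86])

(mobile_fraction, k), _ = curve_fit(frap_recovery, t, f, p0=(0.8, 0.1))
half_time = np.log(2) / k   # seconds to reach half of the recovered plateau

print(f"mobile fraction ~ {mobile_fraction:.2f}, recovery half-time ~ {half_time:.1f} s")

A shorter half-time corresponds to faster delivery of the tagged protein to the bleached region, which is how recovery rates at the SPK can be compared between Rabs, strains or growth conditions.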
The fungal cell wall building processes are the ultimate determinants of hyphal shape. In Neurospora crassa the main cell wall components, β-1,3-glucan and chitin, are synthesized by enzymes conveyed by specialized vesicles to the hyphal tip. These vesicles follow different secretory routes, which are delicately coordinated by cargo-specific Rab GTPases until their accumulation at the Spitzenkörper. From there, the exocyst mediates the docking of secretory vesicles to the plasma membrane, where they ultimately fuse. Although significant progress has been made on the cellular mechanisms that carry cell wall synthesizing enzymes from the endoplasmic reticulum to hyphal tips, much information is still missing. Here, the current knowledge on N. crassa cell wall composition and biosynthesis is presented with an emphasis on the underlying molecular and cellular secretory processes.
445
On the crystallography and composition of topologically close-packed phases in ATI 718Plus®
The morphology of η phase precipitates in 718Plus depends on the thermo-mechanical treatment to which the alloy has been subjected.Often it occurs in colonies in a Blackburn orientation relationship γ ||η with one of the adjacent grains.The η phase precipitates then grow in a disc-like morphology on the γ matrix planes towards the grain interior and a detailed mechanism has been suggested .It is also common to observe small fractions of fine δ phase sheets within η phase discs .In the context of this study it is important to note that after prolonged annealing η precipitates can grow significantly in size, spanning across entire grains as well as growing in aspect ratio to form blocky precipitates with distinct facets .Further, both η and γ′ compete for the same alloying elements, mainly Al, Ti and Nb, resulting in γ′ precipitate free zones around η precipitates and the suggestion that γ′ might play a role in the formation of η .Considering mechanical performance, the η phase is critical for effective grain boundary pinning in sub-solvus forging and whether, for example, fine or blocky precipitates have formed, and to what extent, will have a strong effect on mechanical properties .Topologically close-packed phases may form in most nickel-based superalloys when exposed to conditions of high temperature for long periods of time or during solidification and welding .Typically, TCP phases are composed principally of the elements Ni, Cr, Co, Mo, and W, and the basic crystallography of common TCP phases is summarised in the supplementary information.The structures are relatively complex but at the simplest level consist of pseudo-hexagonal layers of atoms stacked to form sites with coordination numbers as high as 16 accommodating atoms of widely different sizes.In spite of this, the packing efficiencies are comparable to those of ideal close-packed structures.The TCP phases are potentially detrimental if they occur in significant volume fractions such as to cause the depletion of solute atoms which otherwise aid solid solution strengthening ; they cannot be used as strengthening phases themselves due to low number density .The occurrence of TCP phases in 718Plus has so far been little studied and those studies that do report TCP phases in 718Plus have largely neglected their crystallography.Instead, the TCP phases have been identified based mainly on composition, which makes it unclear as to exactly which TCP phases are observed.In terms of the conditions leading to the observation of TCP phases in 718Plus this has mostly been attributed to solidification of the alloy after casting or to welding .Interestingly, we note that no TCP phases were reported in a study on long term stability of 718Plus by Radavich et al. .In this work, we report the observation of TCP phases in 718Plus subjected to high temperature annealing and characterise TCP precipitates both chemically and crystallographically using transmission electron microscopy and diffraction.718Plus samples were provided by Rolls-Royce Deutschland Ltd. & Co KG.The initial ingot material was triple vacuum melted by Allegheny Technologies Inc. 
for high cleanliness and forged into billet product.This was followed by subsolvus forging to form a black forging and heat treatment A performed by Otto Fuchs KG.Further, long-term anneals B-D were performed by the authors, on separate samples, in order to examine microstructural stability.Electron-transparent thin film specimens were prepared as follows.First, samples were extracted from the mid-radius of the heat-treated forgings using electric discharge machining.Slices of 300–500 μm were cut using a saw and 3 mm diameter discs were produced using EDM.These discs were ground to approximately 200 μm thickness and subjected to electrolytic twin-jet polishing using a Tenupol and 10 vol% perchloric acid solution at −5 °C.TEM images and small angle convergent beam electron diffraction patterns were acquired using Philips CM30 and JEOL 200CX microscopes operated at 200 kV.Most images and diffraction patterns were acquired using Gatan 2K digital cameras although selected images were acquired on photographic film.Scanning transmission electron microscopy was performed using an FEI Tecnai Osiris.The machine was operated at 200 kV and energy dispersive X-ray spectrum images and annular dark-field images were acquired simultaneously using a scan step size of 3 nm.The specimen was not tilted and the gun lens was adjusted to produce a large current for increased X-ray generation, which allowed a modest 200–250 ms dwell time per pixel.Scanning precession electron diffraction, in which a PED pattern is acquired at every position in a scan, was performed on a Philips CM300 FEGTEM operated at 300 kV.The scan and simultaneous precession of the electron beam was controlled using a NanoMegas Digistar system, combined with the ASTAR software package .A convergent probe was used, typically with a convergence semi-angle ca. 1 mrad and a precession angle ca. 9 mrad, aligned as described recently .Scans were acquired with a step size of 10 or 20 nm depending on the region of interest.The PED patterns were recorded using a Stingray CCD camera to capture the image on the binocular viewing screen with an exposure time of 40–60 ms. The recorded patterns were corrected for geometric distortions prior to any further analysis.TCP precipitates were observed in samples produced following all four heat treatments.In each sample, bright field images, as shown in Fig. 
1, were obtained from approximately 20 TCP particles to assess sites of occurrence within the microstructure, their morphology and size.Across all samples, it was found that almost all occur at γ-η interfaces and often extend along specific η facets.In a number of cases, it appears that the TCP particles may have nucleated on an η precipitate and continued to grow towards a triple junction or along a grain boundary.TCP particles were also observed at some grain boundaries although it remains possible that these formed initially at a γ-η phase boundary, that was removed during TEM sample preparation.The particles are typically blocky in morphology and exhibit internal faulting.This is in contrast to other alloys in which the TCP particles are plate-like , may be a result of different misfit.In the as-received condition, particles were measured to have their largest dimension in the range 55–230 nm.After further annealing, the particles had grown size ranges: 50–320 nm, 80–1050 nm and 120–1170 nm for conditions B-D, respectively.In the remainder of this work, TCP precipitates are studied in sample D, which was produced following the highest temperature during the additional annealing step.The composition of C14 Laves and σ phase precipitates was studied using STEM-EDX.Four C14 Laves phase, and two σ phase, precipitates were studied and an example of each, including elemental maps, is shown in Fig. 5.Cr, Fe, Co and Mo appear to partition strongly to both C14 Laves and σ phases compared to the η phase.Some elements appear as though they may be enriched around the TCP precipitates; however, this increase in X-ray intensity is likely to be due to thickness variations in the surrounding γ matrix and not chemical segregation.The phase compositions were determined using data processing methods implemented in the HyperSpy python library as follows.The data was de-noised first using singular value decomposition and a mask based on the Mo-Kα X-ray line was used to isolate pixels within the TCP particles.Spectra associated with pixels in the particles were then summed to obtain a representative spectrum, which was modelled using a Gaussian peak at all relevant X-ray lines to extract accurate intensities.Cliff-Lorimer quantification was then applied using k-factors obtained with respect to a Cu reference for the particular instrument used.The composition of each TCP precipitate obtained this way is shown in Table 3.The analysis shows an evident partitioning of Cr, Fe, Co and Mo and that both phases in this alloy contain small quantities of Al and Ti.C14 Laves phase shows ca. 5 at.% of Nb and W whereas the σ phase precipitates have significantly higher levels of Cr but very little Nb or W.The values obtained in this work for both C14 Laves and σ phases are self-consistent to within ±1 at.% and trends in enrichment appear consistent with values reported in the literature for TCP particles in 718 .It should be noted that the W concentration may be unreliable due to peak overlap but since it is a small contribution this has a small effect on the other values.Crystallographic relationships between five σ particles and their surrounding microstructure were compared by plotting the disorientation between phases in the appropriate fundamental zone of axis-angle space.For each phase combination, 50 disorientations were calculated from the orientations associated with 50 randomly selected pixels in each phase.The calculated disorientations form a cluster in axis-angle space ca. 
2° radius, indicative of the level of uncertainty of the analysis.This is consistent with assessments of orientation precision made for spot based orientation determination .The orientation relationships described thus far are marked with bounded yellow circles.The crystallographic relationships between four C14 Laves phases and their surrounding microstructure are illustrated in axis-angle space in Fig. 11 following a procedure similar to that described for the σ precipitate in the previous section.Here, no preferential crystallographic relationship is observed between the C14 Laves phase precipitates and either the η laths or γ matrix, in contrast to the findings for σ precipitates.Although a basal plane relationship may have been expected for these two hexagonal systems, the 6% lattice mismatch is likely to preclude any such simple relationship.We have shown that TCP phases form in 718Plus after relatively short annealing times.Previous experimental studies , have not seen this, perhaps because of the relatively small size and location of the TCP crystallites at the γ-η interface making them a challenge to identify.Laves and σ phase formation in 718Plus was recently predicted by thermodynamic modelling using JMatPro 6.0 in a study on η phase precipitates .No experimental evidence was presented in that work and our findings are consistent with this prediction.Of the TCP precipitates studied, approximately half were C14 Laves phase and half σ phase, although, one C36 Laves phase precipitate was also observed.These precipitates often occur at the γ-η interface on the facets of η particles, which may be expected due to chemical segregation associated with the η particles.We find that the TCP phases form with a narrow compositional range, similar to the behaviour of binary Laves phases .The enrichment trends are also in line with compositions found for TCP particles in alloy 718.No evidence has been found for α-Cr, the predominant co-precipitate of δ in 718 , which is surprising as Cr levels in 718Plus are relatively high.Which TCP phases form is likely to be a function of the availability of specific elements.The σ phase, for example, rejects Nb and Ni more so than the C14 Laves phase.It is therefore less likely to find suitable conditions at a matrix grain boundary but rather at the γ-η phase boundary, in line with our observations here showing all TCPs found at grain boundaries were C14 Laves phase.It is also assumed that stable nuclei will be sensitive to the degree of coherency at the interface, especially σ phase, which has been shown to grow in γ according to a distinct orientation relationship, matching up the close-packed planes.This preference has been found for all sites of interest studied.As has been pointed out before by Rae et al. 
the lattice mismatch has a strong effect on the morphology of TCP precipitates in the γ matrix.They show that for a given alloy composition that leads to a good match σ phase forms sheets in the matrix.For other alloy composition with an increased mismatch, blocky precipitates are formed.In the light of these considerations, it seems plausible to assume that, in 718Plus, the σ phase has a greater mismatch with γ, as σ particles have a blocky morphology.This might also explain why the orientation relationship is not fulfilled closely, with deviations of up to about 3°.The orientation relationship described between η and σ seems to be a consequence of a γ-η orientation relationship which has likely formed during forging and pre-solution treatment and therefore before any σ formation.If a σ precipitate grows on any of the four equivalent planes in γ, then there is a 25% chance that it will also be in an orientation relationship with η, which is consistent with the ratio found in this study.In line with the above, in both occurrences of the orientation relationship, the η basal plane facet was the one occupied by σ.This suggests that certain η facets promote certain disorientations.However, the more significant orientation relationship that σ has is with γ as we have consistently found the same orientation relationship, which is also intuitive as σ grows into the γ grain.Coherency, or the lack thereof, might also explain why no strong orientation relationship has been found for the C14 Laves phase with its surrounding microstructure.The basal plane mismatch of ca. 6% might be too high for matching up basal planes between C14 Laves and η phase.The same would apply to C14 Laves and the close-packed γ matrix planes as the mismatch between η and γ is very small.Initially the finding of two C14 Laves particles with the same orientation on opposite sides of an η particle was assumed to be indicative of a strong orientation relationship.However, additional sites of interest did not confirm this trend and so it may be that the two particles had been connected in the original sample, but that this connection was cut during the etching procedure.The observation of TCP phases after relatively short annealing times has important implications for the prediction of constituent phases and long-term microstructural stability of 718Plus.It also highlights that the thermomechanical history of the alloy influences its stability and the morphology and extent of η phase formed , which can be altered through a change in processing route.As the γ matrix is depleted of solid solution elements by the formation of TCPs, it is reasonable to expect that there may be a detrimental effect on mechanical properties.The following conclusions can be drawn from this work:TCP phases were found in annealed conditions of the commercial nickel-based superalloy 718Plus.The crystal structure and chemistry of TCP phases was found to be consistent with σ and C14 Laves phase.Both phases are enriched in Cr, Co, Fe, Ni and Mo with slightly higher levels of Cr in σ and additional Nb enrichment only in the C14 Laves phase.C14 Laves and σ phase were found to nucleate mainly at distinct facets of thick η phase particles which are abundantly present in the microstructure examined.The σ phase was found to nucleate with distinct disorientations with respect to the η phase, studied using SPED.According to SPED data the orientation of the surrounding γ phase also has a strong impact on the growth of σ phase.BF and DF imaging confirmed the presence 
of planar faults in both TCP phases. At least two deformation mechanisms found in the C14 Laves phase in 718Plus are noteworthy, with defects giving rise to streaking along three distinct reciprocal-lattice directions. The second and third of these hint towards previously unreported deformation modes, consistent with deformation on two further crystallographic planes, respectively.
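The STEM-EDX compositions reported above rely on Cliff-Lorimer quantification, in which the weight-fraction ratio of two elements is proportional to the ratio of their characteristic X-ray intensities through a k-factor. The sketch below illustrates that arithmetic under the thin-film approximation; the intensities, k-factors and the restriction to five elements are hypothetical placeholders rather than the calibrated values used on the instrument in this study.

ATOMIC_WEIGHT = {"Cr": 52.00, "Fe": 55.85, "Co": 58.93, "Ni": 58.69, "Mo": 95.95}

def cliff_lorimer_at_pct(intensities, k_factors, atomic_weights=ATOMIC_WEIGHT):
    # Un-normalized weight fractions: C_i is proportional to k_i * I_i.
    wt = {el: k_factors[el] * counts for el, counts in intensities.items()}
    total_wt = sum(wt.values())
    wt = {el: c / total_wt for el, c in wt.items()}
    # Convert weight fractions to atomic fractions and express as at.%.
    mol = {el: c / atomic_weights[el] for el, c in wt.items()}
    total_mol = sum(mol.values())
    return {el: 100.0 * n / total_mol for el, n in mol.items()}

# Hypothetical fitted K-alpha intensities (counts) and k-factors versus a Cu reference.
intensities = {"Cr": 41200, "Fe": 15800, "Co": 9600, "Ni": 30100, "Mo": 6900}
k_factors = {"Cr": 1.09, "Fe": 1.00, "Co": 1.05, "Ni": 1.10, "Mo": 3.10}

for element, at_pct in cliff_lorimer_at_pct(intensities, k_factors).items():
    print(f"{element}: {at_pct:5.1f} at.%")

In practice the k-factors are instrument-specific (here they were obtained with respect to a Cu reference) and further elements such as Nb, W, Al and Ti would enter the same normalization.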
ATI 718Plus® is a nickel-based superalloy developed to replace Inconel 718 in aero engines for static and rotating applications. Here, the long-term stability of the alloy was studied and it was found that topologically close-packed (TCP) phases can form at the γ-η interface or, less frequently, at grain boundaries. Conventional and scanning transmission electron microscopy techniques were applied to elucidate the crystal structure and composition of these TCP precipitates. The precipitates were found to be tetragonal sigma phase and hexagonal C14 Laves phase, both being enriched in Cr, Co, Fe and Mo though sigma has a higher Cr and lower Nb content. The precipitates were observed to be heavily faulted along multiple planes. In addition, the disorientations between the TCP phases and neighbouring η/γ were determined using scanning precession electron diffraction and evaluated in axis-angle space. This work therefore provides a series of compositional and crystallographic insights that may be used to guide future alloy design.
446
Complete genome sequence of canine astrovirus with molecular and epidemiological characterisation of UK strains
Astroviruses are small non-enveloped, positive sense RNA viruses with a wide host range.Astroviruses are classified into two genera; the Mamastrovirus genera includes astroviruses isolated from humans, pigs, cattle, cats and dogs, whereas astrovirus isolates from birds are categorised into the Avastrovirus genera.Astroviruses were first identified in 1975 in the stools of children with diarrhoea, and are now estimated to cause up to 10% of gastroenteritis cases in children worldwide.Canine astrovirus was first described in the USA, following the identification of star-shaped particles in diarrhoeic stools from a litter of beagles.Later studies have identified CaAstV in Italy, France, China, Korea and Brazil.The genome of astroviruses is typically 6.4–7.3 kb and divided into three open reading frames, ORF1a, ORF1b and ORF2 with a 5′ untranslated region and a 3′ poly-A tail.ORF1 codes for the non-structural proteins and ORF2 encodes the capsid precursor protein.The complete coding sequence of ORF2, and a partial sequence of ORF1b has previously been determined for a number of CaAstV isolates, but to date, no full-length genome sequence of CaAstV has been reported.The purpose of this study was to determine the prevalence of CaAstV in the UK dog population and to obtain where possible the complete genetic sequence of circulating strains.Four CaAstV strains were identified in dogs showing clinical signs of gastroenteritis and sequencing of ORF2 found significant genetic diversity between strains.The first full-length sequences of two CaAstV strains was subsequently determined.Stools were diluted to a final concentration of 10% w/v in phosphate-buffered saline, pH 7.2, and solids were removed by centrifugation at 8000 × g for 5 min.Viral nucleic acid was extracted from 140 μl of each clarified stool suspension using the GenElute™ Mammalian Total RNA Miniprep Kit according to the manufacturers’ instructions.An internal extraction control was added during the nucleic acid extraction process as previously described.cDNA was generated by reverse transcription using MMLV reverse transcriptase enzyme and random hexamers with the reaction performed at 42 °C for 1 h, followed by an inactivation step at 70 °C for 10 min.qPCR was performed using primers targeting a conserved region of the viral RNA-dependent RNA polymerase as previously described.The sequences of primers used for PCR in this study are presented in Table 1.qPCR reactions were prepared using the MESA Blue qPCR MasterMix Plus for SYBR Assay.Briefly, 2 μl cDNA was mixed with 2× MasterMix and 0.5 μM primers, then incubated at 95 °C for 10 min.The thermal cycle protocol used with a ViiA7 qPCR machine was as follows: 40 cycles of 94 °C, 15 s; 56 °C, 30 s; 72 °C, 30 s, followed by generation of a melt curve.Viral genome copy number was calculated by interpolation from a standard curve generated using serial dilutions of a standard DNA amplicon cloned from a positive control.The limit of detection was determined by the lowest dilution of control standard CNA reproducibly detected in the assay.All samples were additionally screened for the presence of canine parvovirus, canine enteric coronavirus and canine norovirus using a 1-step qRT-PCR protocol.All samples positive to CaAstV by qPCR were subjected to conventional PCR to confirm the presence of CaAstV and to enable genome sequencing."The capsid of all positive samples was amplified from cDNA synthesised using SuperScript II Reverse Transcriptase according to the manufacturer's protocol with 0.5 
μM AV12 primer.The PCR reaction was performed using KOD hot start polymerase, with reverse primer s2m-rev, and the forward primer 625F-1 from the original qPCR assay.The amplification programme consisted of an initial 5 min step at 95 °C, followed by 35 cycles with 95 °C for 20 s, 58 °C for 30 s and 72 °C for 90 s.A final elongation step at 72 °C for 5 min was performed, followed by chilling to 4 °C.PCR products were subsequently cloned into pCR-Blunt™ using the Zero Blunt PCR Cloning Kit according to the manufacturers protocol.Sequencing of the 5′ and 3′ regions of the plasmid insert was performed using pCR-Blunt™ specific primers by the University of Cambridge Biochemistry DNA Sequencing Facility.Sequencing primers for the central region of insert were then designed based on the primary sequence data to give a 200 nt overlap with each predecessor, and a second round of sequencing reactions was performed.The complete capsid nucleotide sequence generated using this method was then confirmed by sequencing of PCR products generated from cDNA directly."PCR amplification of ORF1a and ORF1b was achieved using cDNA synthesised using SuperScript II Reverse Transcriptase according to the manufacturer's protocol with 626R-1 reverse primer.A forward primer was designed approximately 300 bp from the 5′ end of ORF1a, based on a small genome fragment fortuitously amplified by mis-priming in a preliminary screen.The Zero Blunt PCR Cloning Kit was then used to clone the PCR product and sequencing performed using the protocol described above.Primers were designed sequentially to enable sequencing of the entire cloned product in both the 5′–3′ and 3′–5′ direction, with 200 nt overlaps between segments.Sequencing of PCR products directly was finally performed to ensure no mutations has arisen as a result of cloning.Each nucleotide was sequenced at least once in each direction, with an average coverage of 3.Sequences at the extremity of the viral genome were determined using 5′ and 3′ RACE utilising a kit according to the manufacturers instructions, and gene specific primers as listed in Table 1.Direct sequencing was then performed as above.Software used for sequence analysis included Vector NTI, BLAST, ClustalW2 and Protein Variability Server software.Evolutionary analysis was conducted using MEGA version 6.Stool samples and clinical data were collected from 67 dogs with severe gastroenteritis admitted to veterinary clinics or an animal shelter distributed across the UK between August 2012 and June 2014.Control samples were collected from 181 dogs without signs of gastroenteritis, from either veterinary inpatients with non-gastrointestinal illness, or dogs at boarding kennels or belonging to veterinary staff.In total 56 breeds of dog were represented.The mean age of dogs with gastroenteritis was 4.3 years and the mean age of control animals was 6.1 years.Nucleic acid extraction and qPCR were performed on 248 stool samples.Samples were systematically tested for the presence of CaAstV using a SYBR-based qPCR screen.In addition, screening for CPV, CECoV, CNV and the internal extraction control was also performed using a 1-step qPCR protocol.CaAstV was detected in a total of four samples.All four positive dogs were showing signs of gastroenteritis, whereas CaAstV was not detected in any dogs without gastroenteritis.The difference between the prevalence of CaAstV in dogs with gastroenteritis and prevalence in dogs without gastroenteritis was statistically significant.CPV was detected in 10/67 dogs with 
gastroenteritis, and CECoV in 2/67 of the same group.Two of the four CaAstV positive dogs were also co-infected with CPV, but no co-infections with CECoV and CaAstV were identified.The age range of CaAstV positive dogs was from 7 weeks to 7 years, with a mean age 2.1 years.Table 2 summarises the clinical information and viral screening results of the four CaAstV positive dogs.The complete coding sequence of the four CaAstV strains identified was determined using conventional PCR and cloning techniques.The ORF2 nucleotide and amino acid sequences of these strains were aligned using ClustalW2.The overall nucleotide identity between strains was 77.1–81.1%, whereas the amino acid identity was 79.3–86.3%.The four sequences were deposited in the GenBank database and assigned accession numbers KP404149–KP404152.A number of studies have reported the N-terminal and C-terminal regions of astrovirus capsids to be relatively conserved, whereas the central region is hypervariable.The human astrovirus capsid protein has previously been divided into three regions; the N terminus, a variable central region, which includes a hypervariable section from 649 to 707, and a conserved C terminus.An analogous approach for the CaAstV capsid was taken by Zhu et al., who divided the capsid into three regions for analysis: amino acids 1–446, 447–730, and 731–end.The four capsid sequences derived from this study have been analysed according to the latter scheme, and sequence identity compared.Sequence analysis of the three regions clearly shows that the majority of sequence variation is concentrated in region II.A 24 nt deletion was identified in samples 2–4 in the 5′ end region II, which has previously been reported in Chinese CaAstV strains.It is not possible to predict the location of this deletion on the capsid structure as it is beyond the region of the astrovirus capsid spike for which the crystal structure has been solved.However, sequence alignment with the human astrovirus type 8 capsid shows this deletion to be located downstream of the caspase cleavage site required for virion maturation, which truncates the full length capsid protein into the mature VP70 form.Therefore it is predicted that the 24 nt deletion will not alter the mature virion.Evolutionary analysis of the four CaAstV sequences from the study, alongside the seven previously reported full-length CaAstV capsid sequences are presented in Fig. 
2.This analysis indicated that the UK strains do not cluster, contrary to a previous study, which analysed CaAstV strains from a single city.Each UK strain is distinct from each other, and whereas one strain clusters most closely with the Chinese strains, the remainder group with strains identified in Italy over a number of years.The complete CaAstV genome was determined from two samples, isolated from a 7-week-old crossbreed dog and a 7-year-old Border Collie.The total length of CaAstV Gillingham/2012/UK is 6600 nt and CaAst/Lincoln/2012/UK is 6572 nt.Each genome encodes three ORFs; ORF1a, ORF1b, and ORF2 flanked by a 5′ UTR, and a 3′ UTR plus a poly-A tail.In human astroviruses, the 5′ UTR is 85 nt in length, whereas data from both CaAstV strains would indicate that the CaAstV 5′ UTR is 45 nt.The 83 nt 3′ UTR of human astrovirus is identical to 3′ UTR in CaAstV Lincoln/2012/UK, whereas the 3′ UTR of CaAstV Gillingham/2012/UK is 2 nt shorter.The nucleotide composition of both CaAstV strains is 29% A, 22% G, 26% T and 23% C.The G/C composition is 45%.The ORF1a of non-canine astroviruses encodes a serine protease.Sequence alignment of CaAstV with astroviruses of other species shows a high degree of conservation in the predicted serine protease region.This is especially pronounced in the regions around the proposed catalytic triad of the serine protease.ORF1a also encodes the viral genome-linked protein, VPg.CaAstV VPg is predicted to start at aa 656, at a conserved QK cleavage site and is 90 aa in length.In other astroviruses, it has been proposed that the C-terminal VPg cleavage site is coded by Q, and the presence of a QS dipeptide at the same site in CaAstV is consistent with this prediction.The amino acid motif KGKK is conserved at the N-terminal end of VPg sequences from both astroviruses and caliciviruses, and this is also identifiable in the CaAstV genome.Another conserved VPg motif is TEXEY, with mutagenesis studies indicating that the Y residue covalently links VPg to viral RNA.Analysis of the CaAstV ORF1a sequence also identifies the conserved TEXEY motif at 684–688, thus this tyrosine is predicted to covalently link to the RNA genome.However, the CaAstV sequence diverges slightly from the other mamastroviruses studied, in that X of the motif corresponds to K, whereas this is E/Q in all other mamastroviruses.A −1 ribosomal frameshift site between ORF1a and ORF1b, present in human astroviruses, is also conserved in CaAstV.This translational frameshift is directed by the slippery heptamer sequence AAAAAAC at position 2666 in the CaAstV genome.A stem loop structure is predicted downstream of the slippery sequence, as shown in Fig. 
4.The slippery sequence and downstream stem loop are highly conserved amongst mamastroviruses.The 3′ end of ORF1a overlaps with ORF1b by 49 nucleotides.This is shorter than the 71 nt overlap reported for human astroviruses.ORF1b is predicted to code for an RNA dependent RNA polymerase.The CaAstV sequence contains a YGDD motif at aa 1252, common to RdRps of a variety of RNA viruses, supporting this conclusion.ORF1b of CaAstV aligns with the RdRp of human astroviruses with 58–60% aa identity.There is a similar identity to feline astrovirus and porcine astrovirus.CaAstV aligns most closely with the Californian sealion astroviruses, though incomplete Californian sea lion astrovirus sequences were available.As for all other astroviruses studied to date, an overlapping reading frame exists at the ORF1b–ORF2 junction of CaAstV.The CaAstV ORF1b–ORF2 overlap sequence is 8 nt, as reported for other mamastroviruses, hence ORF2 is in the same frame as ORF1a.As previously reported, the capsid sequence was found to have an in-frame start codon 180 nt upstream of the start codon homologous to other mamastrovirus genomes.The 6 aa C terminus of the VP1 is highly conserved with several mammalian AstVs.This motif is within a highly conserved nucleotide stretch, s2m, overlapping the termination codon of ORF2, and has been identified in both CaAstV strains sequenced.The overall nucleotide identity between the two CaAstV strains sequenced was 88.5%.Sequence comparison of the individual ORFs is presented in Table 3.This clearly shows that ORF1b is most closely conserved, making this an ideal target for qPCR screens.Conversely, the capsid sequence is most diverse.A phylogenetic tree was constructed by multiple alignment of the full-length genome of the two CaAstV strains isolated in this study, and a number of astrovirus reference strains isolated from different mammalian species.This was achieved using MEGA6 software.CaAstV has previously been detected sporadically in dogs across the world, but the association with disease, prevalence levels and genetic diversity is largely unknown.This study presents the first identification and molecular characterisation of CaAstV cases in dogs in the UK.Sequencing of the viral capsid for all four strains revealed extensive genetic diversity and the first full-length sequences for CaAstV were determined for two strains.The prevalence of CaAstV in gastroenteritis cases in this study was shown to be 6.0%.This prevalence was unexpectedly high given that CaAstV has not previously been reported in the UK.It may be predicted that this is an underestimation of prevalence based on the population of dogs surveyed.The majority of previous CaAstV epidemiological studies have focused on dogs less than 6 months old, whereas this study included dogs of any age.Serological studies have shown that exposure to CaAstV typically occurs in young animals, with dogs older than 3 months significantly more likely to be seropositive than younger dogs.This suggests that studies focusing only on young dogs will identify more positive CaAStV cases.However, the decision to survey dogs of any age in this study enabled detection of a CaAstV case in a 7-year-old dog.This is oldest case of CaAstV reported to date and highlights the need to have an index of suspicion for infectious causes of gastroenteritis in dogs of any age.Indeed although human astrovirus is more common in paediatric populations, infections in the elderly are reported.The pathology caused by CaAstV in dogs is uncertain as CaAstV has 
previously been detected in the stools of both healthy and diseased dogs.However, our study shows a clear relationship between the presence of CaAstV RNA in stool samples, and the presence of clinical signs of gastroenteritis.This finding is in agreement with two previous studies from Italy and China, but is at odds with a French study which found no significant difference in CaAstV identification between diarrhoeic or healthy puppies.Determination of the specific pathology induced by CaAstV infection in dogs is often confounded in a clinical setting by co-infections with other gastroenteric pathogens.Experimental studies will be required to confirm or refute the association of CaAstV with gastroenteritis, but this study does suggest CaAstV can cause disease.Sequencing of the capsid region of each CaAstV strain identified in this study revealed significant sequence variation.This mirrors the variation previously identified within human astrovirus isolates.At present, astroviruses are named according to the species in which they are isolated, and subsequent classification is based upon serotypes; these are defined if there is a 20-fold or greater two-way cross neutralisation titre.Sequence analysis has verified this classification, with human astroviruses 1–8 having 86–100% nucleotide identity within a serotype, based on capsid sequences.The nucleotide variation within the capsid region of the four CaAstV strains was shown to be 77.1–81.1%, which strongly suggests these strains are also different serotypes.Confirmation of this requires serological analysis, but unfortunately repeated attempts to grow the CaAstV isolates identified in this study in cell culture failed.Identification of four possible CaAstV serotypes circulating in the UK alone raises questions regarding the possible origins of these strains.Phylogenetic analysis of the UK capsid strains alongside the limited number of CaAstV sequences previously listed in GenBank was unexpected.There was no clustering of the UK strains, unlike the grouping of all Chinese strains.Instead UK strains each grouped with a different CaAstV isolate from either China or Italy.With such limited sequences available it is not possible to determine whether CaAstV strains have spread globally, or independent evolution has occurred.Clearly a high rate of evolution does occur within all astroviruses however, as their RNA genome facilitates introduction of point mutations and recombination events.Given the strain diversity identified, it has been suggested that some CaAstV strains may be more pathogenic than others.This has previously been reported for astroviruses of mink, which show variation in their ability to invade the central nervous system in a strain related manner.Assessment of this risk will require wider epidemiological and clinical studies.Another concern raised by the existence of multiple circulating CaAstV strains is regarding future disease control.Management of viral causes of gastroenteritis in dogs is best achieved by widespread vaccination, exemplified by the widely used canine parvovirus vaccine.However, if CaAstV vaccine development is considered, the presence of multiple strains will make vaccine design challenging.Full genome sequencing of two CaAstV isolates revealed them to be closely related and possess a typical astrovirus organisation.The first full length sequence of an astrovirus was for human astrovirus in 1994, and relatively few full length sequences have since been determined.Sequence analysis of the CaAstV strains 
identifies the presence of a serine protease and VPg within ORF1a as for other astroviruses, which is separated from and conserved RdRp of ORF1b by a −1 frameshift.In summary, this study has not only identified CaAstV circulating in the UK dog population, but also found significant genetic diversity within the CaAstV strains.Furthermore, full genome sequencing of two CaAstV isolates has enabled detailed molecular characterisation of this astrovirus species, and provides the astrovirus field with further examples of genome variation.Stool samples were collected from dogs admitted to five participating veterinary clinics and a single animal shelter in counties in the south and east of England; Cambridgeshire, Kent, Lincolnshire, Middlesex and Suffolk.Ethical approval was not required for this study as all samples collected were animal waste products.With owner consent, dogs were recruited to the study if they passed stools whilst hospitalised.Stool samples were collected from dogs from the animal shelter if they passed diarrhoea.All stool samples were stored at −20 °C until and during transportation to the laboratory, where they were stored at −80 °C prior to nucleic acid extraction.As controls, stool samples were also collected from healthy dogs owned by veterinary staff at each clinic, as well as from dogs at participating boarding kennels.Basic case data was recorded for each dog from which a stool sample was collected, including age, breed, sex, reason for admission, and any recent history of enteric disease.
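Copy numbers in the qPCR screen described earlier were obtained by interpolation from a standard curve built from serial dilutions of a cloned control amplicon. The sketch below illustrates that interpolation as a linear fit of Cq against log10 copies; the dilution series, Cq values and the sample Cq are hypothetical and do not correspond to the laboratory's calibration data.

import numpy as np

# Hypothetical 10-fold dilution series of the cloned control (copies per reaction)
# and the Cq value measured for each dilution.
standard_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
standard_cq = np.array([13.1, 16.5, 19.9, 23.4, 26.8, 30.3])

# Fit Cq = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(standard_copies), standard_cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # amplification efficiency of the assay

def copies_from_cq(cq):
    # Interpolate the copy number of a sample from its Cq using the standard curve.
    return 10.0 ** ((cq - intercept) / slope)

sample_cq = 24.7   # hypothetical Cq of a positive stool sample
print(f"slope = {slope:.2f}, efficiency = {100 * efficiency:.0f}%")
print(f"estimated copies per reaction = {copies_from_cq(sample_cq):.2e}")

The limit of detection then corresponds to the lowest dilution of the control that is reproducibly detected, as noted above.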
Astroviruses are a common cause of gastroenteritis in children worldwide. These viruses can also cause infection in a range of domestic and wild animal species. Canine astrovirus (CaAstV) was first identified in the USA, and has since been reported in dogs from Europe, the Far East and South America. We sought to determine whether CaAstV is circulating in the UK dog population, and to characterise any identified strains. Stool samples were collected from pet dogs in the UK with and without gastroenteritis, and samples were screened for CaAstV by qPCR. Four CaAstV positive samples were identified from dogs with gastroenteritis (4/67, 6.0%), whereas no samples from healthy dogs were positive (p < 0.001). Sequencing of the capsid sequences from the four CaAstV strains found significant genetic heterogeneity, with only 80% amino acid identity between strains. The full genome sequence of two UK CaAstV strains was then determined, confirming that CaAstV conforms to the classic genome organisation of other astroviruses with ORF1a and ORF1b separated by a frameshift and ORF2 encoding the capsid protein. This is the first report describing the circulation of CaAstV in UK dogs with clinical signs of gastroenteritis, and the first description of the full-length genomes of two CaAstV strains.
447
An enhanced temperature index model for debris-covered glaciers accounting for thickness effect
Debris-covered glaciers, which are mantled in an extensive layer of debris over at least part of the ablation area, are important features of many mountainous areas of the world, from the Himalaya-Karokoram-Hindukush region to the European Alps and North-America.Since they commonly reach lower elevations than debris-free glaciers, they are important for their contribution to water resources, and play a key role for the hydrology of high elevation catchments.Nevertheless their response to climate is not fully understood yet, which hinders a sound assessment of catchment melt and runoff, but it is clear that it differs from that of debris-free glaciers.The presence of a thin debris layer enhances ablation through increased absorption of shortwave radiation at the surface, compared with bare ice, and shorter vertical distance for heat conduction, while a thick cover reduces ablation as insulation dominates over increased absorption of shortwave radiation.The point of divergence between melt enhancement and reduction by debris cover is termed the critical thickness.The value of the critical thickness has been shown to vary between locations depending on the debris properties and climatic setting.The shape of the extrapolated melt rate-debris thickness relationship has often been referred to as the Østrem curve following Østrem.In general, a debris layer is assumed to reduce ablation at the glacier scale, as extensive debris cover tends to be thicker than the critical thickness.Recent remote sensing studies, however, have provided evidence of mass losses over debris-covered glaciers as large as those over debris-free glaciers.They have thus suggested an anomalous behavior that might be explained by the presence of supra-glacial features such as ice cliffs and lakes that develop over debris-covered glaciers and absorb heat considerably, favouring mass losses.The evidence is limited to a very recent period and has been obtained only through remote sensing estimates of glacier mass balances, and never through numerical modelling at the glacier scale, and might therefore need further investigation.Despite this evidence at the glacier scale, however, it is clear that, at small scales, a layer of debris over ice reduces melt starting from few centimetres.For calculations of melt rate under debris, two types of approaches have been commonly applied.On one side, physically-based energy balance models calculate the exchange of energy between the debris layer and the atmosphere on top, and ice melt at the bottom of the debris is computed as the heat transferred at the interface between ice and debris, often assuming that the ice is at melting point.This type of approach requires numerous input meteorological variables as well as surface variables such as surface roughness, albedo and debris water content.On the other side, at the catchment scale and in particular in data scarce regions, melt under debris has been calculated with empirical models after recalibration of their parameters for debris conditions.In general, smaller values of the empirical melt parameters are used for debris than for clean ice, to reproduce the assumed average reducing effect of debris over melt.While the application of energy balance models is constrained by data availability, which are either not available in many areas or difficult to extrapolate or model, the latter approach has the disadvantage that it prescribes a constant in space reduction of melt.In reality, different melt rates are associated with different debris 
thickness, a fact nicely summarised in the Østrem curve, and spatial variability of debris thickness is common on debris-covered glaciers.This spatial variability is neglected in empirical models and can lead to erroneous simulations of total melt at the glacier scale.In this paper, we suggest a new approach for calculations of melt rates under debris that retains the limited amount of input data typical of temperature index models but introduces a parameterisation to account for the effect of debris thickness.We build upon the enhanced temperature index model developed for calculation of melt over debris-free ice by Pellicciotti et al. and Carenzo et al. and used in numerous other applications and modify that model to account for varying debris thickness.We therefore suggest an approach that is intermediate between empirical methods and full energy balance models.To develop the new model we use melt rates simulated with a debris energy balance model, and calibrate the new model empirical parameters against the EB simulations.As reference, we use the debris EB model developed by Reid and Brock using data from Miage Glacier, Italian Alps.We use the same Miage data sets also for the development of the new Debris Enhanced Temperature Index model, and test the model developed in this way with meteorological and ablation data collected at one Automatic Weather Station over a debris-covered section of Haut Glacier d’Arolla, Switzerland.This study is undertaken on two different glaciers, Miage Glacier and Haut Glacier d’Arolla.Miage Glacier is a heavily debris-covered glacier located in northwest Italy.Haut Glacier d’Arolla, located in the southern part of Switzerland, is mainly debris-free but is experiencing an increase of debris cover over bare ice surface.Brock et al. provide an extensive description of the data collected on Miage Glacier, whereas for the data from Haut Glacier d’Arolla the reader is referred to Reid et al. 
and Carenzo.This study is carried out at the point scale and it uses data collected at two Automatic Weather Stations during the 2009 and 2010 ablation seasons on Miage Glacier and Haut Glacier d’Arolla, respectively.Data collected during five additional ablation seasons on Miage Glacier are also used to investigate the model transferability in time.A detailed description of these data sets can be found in Reid and Brock.The AWS located on Miage was installed on a 23 cm debris layer, whereas debris thickness measured at a stake close to the AWS location in Haut Glacier d’Arolla was 6 cm and it is assumed to be the value at AWS.For this study, we apply on Miage Glacier the same parameter set as Reid and Brock.On Haut Glacier d’Arolla, in absence of site specific parameters, we use the same values as for Miage Glacier as assumed in Reid et al.Debris properties are assumed constant in time.The model presented in this study is a modification of the enhanced temperature index model of Pellicciotti et al., in which melt was calculated as a sum of the full shortwave radiation balance and of a temperature dependent term.We use the same model but modify it to include the dependency of melt rates on debris thickness.The approach to derive the new model is as follows: we first run the energy balance model by Reid and Brock and evaluate it against surface temperature records at the AWSs on Miage and Haut Glacier d’Arolla.We then use it as reference to develop, calibrate and validate the new DETI model, since stake readings are too coarse a data set for univocal parameter calibration.Finally, we compare the results obtained with the new DETI model to melt simulations obtained with the ETI model with parameters recalibrated for debris conditions, to assess the performance of the new model in comparison to the more traditional empirical method of melt calculations under debris.The point scale debris energy balance model developed by Reid and Brock is used as reference for the calibration and validation of the DETI model.A detailed description of the DEB model can be found in Reid and Brock.Here we only report the main model features.The DEB point model outputs were validated against surface temperature measurements at Miage Glacier during the 2005, 2006 and 2007 ablation seasons for details).The DEB model cannot replicate the reduction in melt rate for very thin debris that is suggested by the Østrem curve, for reasons discussed extensively in Reid and Brock.While it is clear that the melt rate increases for thin debris layers, no EB model at yet has provided evidence that it reaches a maximum and then decreases towards the bare-ice melt rate as the debris thickness tends towards zero.This effect was obtained only by Reid and Brock using a patchy debris scheme, and more recently by Evatt et al. by incorporating debris layer air flow.These promising additional schemes need testing and more experimental evidence, and for the development of the DETI model we thus use the original DEB model of Reid and Brock.As a result, the DETI model will suffer from the same limitations as the DEB model for thin debris, and will be used only to study the reducing effect of thick debris on melt rates.In previous works, TF and SRF were adjusted for melt under debris and recalibrated against stakes readings or EB simulations.Similar approaches have been adopted by e.g. 
Immerzeel et al.However, the accuracy and transferability of this approach is limited by the lack of a term representative of the debris thickness feedback.The parameter calibration can lead to an improvement in the melt rate computation for a specific debris thickness value, but it can not reproduce the behaviour suggested by Østrem.Reid and Brock evaluated the DEB model at the debris-covered Miage Glacier for the 2005, 2006 and 2007 ablation seasons.Therefore in this paper we only validate the results of the DEB model for the new ablation season at Miage Glacier, and the new study site, Haut Glacier d’Arolla.The model is validated by comparing the mean daily cycles of measured and modelled debris surface temperature, following Reid and Brock.Measurements of surface temperature from radiometers, obtained from records of outgoing longwave radiation by inverting Stefan-Boltzman relationship, can have significant uncertainty due to sample bias on a highly variable field of surface temperature.We used a CNR1 net radiometer that was installed at 2m above the surface.Thus, 99% of the input to the lower sensor came from a circular area with a radius of 20 m.In this area, debris thickness was not constant at the value of 6 cm measured at the stake in proximity of the AWS, but varied significantly so that the field of view of the radiometer very likely incorporated areas of varying debris thickness, and of thinner debris in particular.To account for this, we compare the observations to the modelled values with 6 cm thickness as well as with those obtained by varying by ±3 cm around 6 cm, which should represent some of the variations observed in the debris thickness in the area.The effect of varying debris thickness on the variability of surface temperature is particularly strong for thin debris, so that we expect the heterogeneity of the debris layer to be more important at Haut Glacier d’Arolla than at Miage Glacier.The Østrem curve is built by running the DEB model using the meteorological forcing at the AWS on Miage Glacier during the six ablation seasons and varying the debris thickness from 0.1 to 50 cm.The ablation stake readings during the 2005 ablation season are also included in Fig. 
The Østrem curve is built by running the DEB model using the meteorological forcing at the AWS on Miage Glacier during the six ablation seasons and varying the debris thickness from 0.1 to 50 cm.The ablation stake readings during the 2005 ablation season are also included in Fig. 3.The results show a relatively consistent behaviour and similar melt rate values over the six years investigated.Thick debris layers produce low melt, whereas melt rates increase when debris becomes thinner, following the expected behaviour.Differences among seasons are small compared to the effect of thickness, suggesting that the meteorological forcing is less important to melt variations than debris thickness, particularly in the case of thick debris layers.The Østrem curve obtained by forcing an EB model with meteorological variables collected at one site and varying debris thickness is a theoretical exercise, as meteorological variables such as air temperature or the atmospheric boundary layer can vary with thickness.By assuming the same time series of atmospheric forcing, such additional debris effects at the interface with the atmosphere are not taken into account.Moreover, very thin debris cover that dramatically enhances melt is very unlikely to be found over large areas.Thin debris is generally spread out and patchy, with some areas exposed to bare ice that reduce the overall effective ablation of the area.Thus, the behaviour of the simulated curve for very thin debris should be closer to the bare ice melt rate, as suggested by the original Østrem curve.The relationship between melt rates and the main atmospheric forcing is investigated by comparing the mean daily cycle of air temperature and incoming shortwave radiation to the melt rate cycle simulated by the DEB model.On Miage Glacier, a lag of melt behind air temperature and incoming shortwave radiation is evident.In particular, a clear shift between the peaks of the two cycles is visible in Fig. 4.The lag represents the time needed for the energy transfer through the debris layer, and is proportional to the debris thickness, in agreement with Fourier's law of heat conduction.A higher lag corresponds to a thicker debris layer.The two main aspects emerging from analysis of the DEB simulations and discussed in this section are thus: 1) Melt rate decreases with the increase of debris thickness, and 2) the lag between the peaks of the daily cycles of air temperature and shortwave radiation versus melt rate increases with the increase of debris thickness.These are the two features that we attempt to incorporate into the DETI model.Table 1 lists the recalibrated DETI parameters obtained for each debris thickness and the corresponding statistical performance.The model performance as represented by the NSE is in general very high.It is lower for the two highest values of debris thickness, going from 0.937 at 0.3 m to 0.875 at 0.4 m and 0.624 at 0.50 m.This is due to the fact that the NSE is lower for low numerical values of the target variables, and low values of melt rates are typical of higher debris thickness.The NSE is a normalised measure that compares the mean square error generated by a model simulation to the variance of the observed variable time series, and is thus higher for cases where the variability in the time series of the target variable is higher.The low NSE values corresponding to the two thickest debris do not necessarily indicate a lower performance, as indicated by the low values of the RMSE corresponding to these two cases (both scores are defined below for reference).Lag parameters for air temperature and incoming shortwave radiation assume generally the same value.The small differences are due to the fact that the diurnal cycle of air temperature is slightly delayed compared to the incoming shortwave radiation one.
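For reference, the two scores reported in Table 1 are computed from the hourly DETI melt rates M_t^DETI and the reference DEB melt rates M_t^DEB as (standard definitions; the notation is ours):

\[
\mathrm{NSE} = 1 - \frac{\sum_{t}\left(M_t^{\mathrm{DETI}} - M_t^{\mathrm{DEB}}\right)^2}{\sum_{t}\left(M_t^{\mathrm{DEB}} - \overline{M^{\mathrm{DEB}}}\right)^2},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t}\left(M_t^{\mathrm{DETI}} - M_t^{\mathrm{DEB}}\right)^2}
\]

NSE tends to 1 for a perfect match but also depends on the variance of the reference series, which is why the thickest debris cases combine a low NSE with a low RMSE.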
In light of the results shown in Table 1 and in order to reduce the number of parameters, lagT and lagI are condensed into a single term.This assumption leads only to a slight reduction of the DETI model performance, which is considered acceptable in view of the gained computational benefits.lag, TF and SRF are then expressed as a function of debris thickness.The debris thickness feedback implies that lag, TF and SRF are variables.Their relationship with debris thickness is investigated in Fig. 7.lag shows a remarkably linear behaviour with debris thickness and is approximated with a linear regression with slope lag1 and intercept lag2.Two parameters are thus included in the DETI model and the model calibration leads to lag1 = 21.54 and lag2 = –1.193.TF and SRF also decrease with debris thickness due to the decrease of melt associated with thicker debris layers.However, their behaviour is not linear and we use a different function to describe the two relationships.This choice is justified by the different effect on melt rates and relation to debris thickness of the two variables and associated energy contributions.Incoming shortwave radiation has a daily cycle and energy gained during the day is given back to the atmosphere at night enabling a decoupling of the debris surface energy balance from the ice-debris interface for thick debris.On the other hand, so long as temperature is positive it can always contribute energy to the debris-ice interface, thus justifying different functional forms.We tried different functions and used those with the best fit to the data.As a result, TF varies with debris thickness assuming a power law, whereas an exponential decrease is adopted for SRF.The model calibration leads to TF1 = 0.016, TF2 = –0.621, SRF1 = 0.0079, and SRF2 = –11.21.
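The parameterization just described can be condensed into a short sketch. The code below is a minimal illustration, not the authors' implementation: the functional forms (lag linear in d, TF a power law of d, SRF an exponential decay of d) and the calibrated constants are taken from the text, but the use of d in metres, the melt threshold TT = 0 °C, the hourly time step, the function name deti_melt and the synthetic forcing are assumptions of ours. The classical ETI model corresponds to the same melt expression with constant TF and SRF and no lag.

```python
import numpy as np

# Minimal sketch of the DETI parameterization described above (not the authors'
# code).  Assumptions: lag(d) is linear in debris thickness d (m), TF(d) is a
# power law, SRF(d) an exponential decay, using the calibrated constants quoted
# in the text; melt is set to zero when the lagged air temperature is at or
# below a threshold TT (here 0 degC), as in the ETI family of models.

LAG1, LAG2 = 21.54, -1.193     # lag(d) = LAG1*d + LAG2, in hours
TF1, TF2 = 0.016, -0.621       # TF(d)  = TF1 * d**TF2
SRF1, SRF2 = 0.0079, -11.21    # SRF(d) = SRF1 * exp(SRF2*d)


def deti_melt(T, I, albedo, d, dt_hours=1.0, TT=0.0):
    """Hourly melt under a debris layer of thickness d (m).

    T: air temperature (degC), I: incoming shortwave radiation (W m-2),
    both 1-D arrays at an hourly time step.
    """
    tf = TF1 * d ** TF2
    srf = SRF1 * np.exp(SRF2 * d)
    # single condensed lag term; clipped at zero for very thin debris
    shift = max(int(round((LAG1 * d + LAG2) / dt_hours)), 0)
    T_lag = np.roll(T, shift)   # melt at time t responds to forcing at t - lag
    I_lag = np.roll(I, shift)   # (wrap-around at the series start is ignored here)
    melt = tf * T_lag + srf * (1.0 - albedo) * I_lag
    melt[T_lag <= TT] = 0.0
    return melt


if __name__ == "__main__":
    # Synthetic month of hourly forcing, only to illustrate the decrease of
    # mean daily melt with increasing debris thickness (an Ostrem-type curve).
    hours = np.arange(24 * 30)
    T = 8.0 + 6.0 * np.sin(2 * np.pi * (hours % 24) / 24)
    I = np.clip(800.0 * np.sin(2 * np.pi * (hours % 24) / 24), 0.0, None)
    for d in (0.05, 0.10, 0.23, 0.40, 0.50):
        print(f"d = {d:.2f} m  mean daily melt = {24 * deti_melt(T, I, 0.15, d).mean():.2f}")
```

Run over the thicknesses used in the paper, the mean daily melt from this sketch decreases monotonically with d, mimicking the descending limb of the Østrem curve; the forcing and albedo in the demo are placeholders rather than measured Miage data.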
Fig. 8 shows the comparison between the Østrem curve obtained by the DETI model and the one simulated by the DEB model on Miage Glacier during the 2005 ablation season.The two curves present a similar behaviour.Higher discrepancies occur for thin debris layers, when the DETI model slightly overestimates the mean daily melt rate.The models do not replicate the reduction in melt rate for thin debris below the critical thickness.To investigate the DETI model performance further, the mean daily cycle of melt rate simulated by the new empirical debris model is compared to the one obtained using the DEB model for varying thicknesses.For thin debris layers, the DETI model tends to slightly overestimate the melt rates, especially during the night.For thicker debris, the two mean daily cycles are very close.Overall, the DETI model performance is high and the model can reproduce the decrease of melt caused by the increase of debris thickness.The lag factor accounting for the energy transfer through the debris layer produces a substantial improvement in comparison to the results obtained with a more classical empirical model.The increase in model performance obtained with the new model is assessed by comparing it to results from the ETI model calibrated for debris conditions at the AWS on Miage Glacier.Both models are compared to the DEB model outputs on Miage Glacier during the 2005 ablation season, which was the season that allowed the best validation because of the numerous ablation stake readings.The ETI model is also calibrated against hourly melt rates computed by the DEB model.Despite the parameter recalibration, the ETI model is not able to correctly reproduce the mean daily cycle of melt rate, as it overestimates low melt rates and underestimates high melt rates.The sum of the two errors might result in daily melt rates similar to the observed ones, but these would result from error compensation rather than accurate simulations.The DETI approach, on the other hand, can clearly reproduce the reference mean daily cycle of melt rate.Some discrepancies occur for the low melt rates during the nighttime and at the beginning of the day, but the increase in performance over the ETI is significant.Thus, despite being characterized by a higher number of parameters, the new formulation seems more appropriate for calculations of melt rates under debris.The model transferability in time is assessed by applying the DETI model to five other ablation seasons, namely 2006, 2007, 2009, 2010 and 2011, on Miage Glacier.The parameter set calibrated for the 2005 ablation season is transferred as such to the other five seasons.Table 2 summarizes the Nash–Sutcliffe efficiency values obtained by comparing the hourly melt rates simulated by the DETI model to those computed by the DEB model.As observed in 2005, the DETI model performance is good for debris thickness ranging from 0.05 m to 0.40 m, but the NSE becomes lower than 0.7 for debris thickness equal to 0.5 m because of the lower actual numerical values of melt.The RMSE values however indicate that the actual difference between model and observations is low.In general, the agreement tends to decrease with increasing debris thickness, but this error is of lesser importance since for these debris thicknesses melt is very low.A lower model performance is obtained during the 2009 ablation season for debris thickness equal to 0.5 m.The 2009 summer was a particularly warm season.The DETI model transferability in space is evaluated in terms of scatterplots of hourly melt rates and mean daily cycles of melt rate at the AWS on Haut Glacier d’Arolla in 2010.Table 2 shows the NSE and RMSE calculated by comparing the DETI outputs against the DEB simulations.Values of the NSE are of the same magnitude as those for Miage, except for the thinner debris layers, for which the performance is slightly lower in Arolla.On the other hand, the RMSEs are in general lower in Arolla than in Miage, suggesting smaller absolute differences between the two models.It is difficult to explain the lower NSE associated with thinner debris, but we notice that the same model features are evident in the Miage simulations, thus suggesting a consistency in model behaviour.A possible explanation for the overestimation of melt during the day for thin debris might be found in the values of the curve fitted to the optimised parameters, which slightly overestimates both TF and SRF for d ≤ 10 cm.Higher parameters would result in higher melt simulations when both Ts and I are high, i.e.
during the day hours.Another possible reason for the overestimation of melt rates during the early morning and peak hours could be that the model parameters are constant over the day, while the energy fluxes are highly variable.While the variability of the shortwave radiation flux is explicitly included in the melt equation, the diurnal changes of all other fluxes are lumped together in one temperature-dependent term where a constant TF multiplies air temperature.The DETI model lacks an explicit representation of the strongly varying sensible heat fluxes, and thus misses a negative term during the day that cannot be accounted for entirely by the calibrated TF, as this also lumps together all other temperature-dependent fluxes.This could justify the overestimation of melt rates during the day, but it is not clear why this effect would be evident for thin debris only.The correlation coefficients in Fig. 11, ranging from 0.969 to 0.879, also suggest good agreement between the hourly melt rates simulated by the DEB model and those modelled by the DETI, for debris thicknesses varying from 0.05 to 0.5 m, and confirm the overestimation of high melt rates for thin debris apparent in Fig. 12.Overall, the DETI model performance at the validation site of Haut Glacier d’Arolla seems comparable to that at Miage Glacier, thus supporting the model transferability in space, at least for sites in the same broad climatic and geographic setting.The agreement between the model outputs obtained with the new empirical approach and those simulated by the reference DEB model thus remains good also when no parameter recalibration is conducted.However, the robustness of the new empirical parameters should be tested at other sites and related to debris properties, which can differ substantially for different materials and climatic conditions.As indicated above, a limitation of the model might be evident when the energy fluxes that are represented in the lumped temperature-dependent term have different signs, or opposite patterns during the day or the season.These are unlikely to be captured by a simplified term where the temporal variability is prescribed only by the variation of air temperature.In such cases, a more physically based DEB model might be preferred.Locations with high debris moisture content might also not be appropriate for the application of the model without recalibration, because its parameters were calibrated for the relatively dry conditions of ablation seasons in the European Alps, where the latent heat flux is of minor significance.In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface.The model empirical parameters are expressed as a function of debris thickness and optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model.The latter is validated against ablation stake readings and surface temperature measurements.Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization.We compare this approach to a simple ETI model with empirical parameters recalibrated for debris conditions.This model is not able to reproduce correctly the mean daily cycle of melt, severely underestimating the higher melt rates and overestimating the lower ones.The introduction of the lag parameter in the DETI model, by accounting for the time taken for heat
transfer through debris, leads to a significant improvement in the model performance.The performance of the new DETI model in simulating the glacier melt rate at the point scale is comparable to that of the physically based DEB model, thanks to the definition of model parameters as a function of debris thickness.The model simulates the descending limb of the Østrem curve, whereas it is not able to reproduce the melt enhancement at very thin debris thicknesses, a limitation that it shares with the original DEB model.Both models could only be applied to thin debris by using a patchy debris scheme as in Reid and Brock, or by including evaporative fluxes within the debris layer as in Evatt et al.; this is beyond the scope of this paper but should be investigated in future work.The drawback of this approach is that it requires numerous empirical parameters that need calibration.We have shown however that they seem to be relatively stable in time at the same site and transferable in space from Miage Glacier to Haut Glacier d’Arolla in Switzerland.The two sites are in the same broad geographic and climatic setting of the European Alps, at a relatively close distance, and this transferability in space should thus be further investigated at other sites, both in the same region and in distinct mountainous areas such as the Andes.This task might be difficult due to the lack of observations of both meteorological and surface variables as well as ablation rates from debris-covered sites, but it seems imperative to strengthen the model's physical basis.Application of the new DETI model requires estimates of debris thickness and its variability in space over glaciers, something that has been lacking due to the difficulties of direct measurements in the field and the lack of calculation methods.Recently, however, progress has been made in estimating debris thickness from satellite thermal imagery.The methods suggested are based on the inversion of the energy balance at the debris surface and knowledge of surface temperature from the satellite thermal imagery, thus solving for debris thickness as the only unknown, if the input meteorological forcing to the site is known.The main uncertainty in these approaches to date is related to the non-linear profile of temperature within the debris, which causes different images to result in different thicknesses for the same site.Clear progress however has been made from the first attempts, so that there is potential that accurate maps of debris thickness can be obtained in the near future.Combination of debris thickness distribution derived from satellite data and the DETI model could thus be applied to remote glaciers to provide improved estimates of melt in comparison to previous first-order approximations calculated assuming constant thickness.Its main advantage is its limited data requirement, which makes it a novel approach that can be included in continuous mass balance models of debris-covered glaciers for long-term past and future simulations.
Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribes a constant reduction in melt across the entire glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the Østrem curve. Its large number of parameters might be a limitation, but we show that the model is transferable in time and space to a second glacier with little loss of performance. We thus suggest that the new DETI model can be included in continuous mass balance models of debris-covered glaciers, because of its limited data requirements. As such, we expect its application to lead to an improvement in simulations of the debris-covered glacier response to climate in comparison with models that simply recalibrate empirical parameters to prescribe a constant across-glacier reduction in melt.
448
Molecular Basis for ATP-Hydrolysis-Driven DNA Translocation by the CMG Helicase of the Eukaryotic Replisome
Chromosome duplication is catalyzed by the replisome, a multi-subunit complex that combines DNA unwinding by the replicative helicase and synthesis by dedicated polymerases.During eukaryotic replication, the helicase function is provided by the Cdc45-MCM-Go-Ichi-Ni-San (CMG) assembly comprising Cdc45, GINS, and a hetero-hexameric motor known as the MCM complex.MCM belongs to the superfamily of AAA+ ATPases, which contain bipartite active sites with catalytic residues contributed by neighboring subunits.The process of CMG formation is best understood in budding yeast, mainly due to in vitro reconstitution studies.During the G1 phase of the cell cycle, MCM is loaded as an inactive double hexamer around duplex DNA.The switch into S phase promotes the recruitment of Cdc45 and GINS, leading to origin DNA untwisting by half a turn of the double helix.Recruitment of the firing factor Mcm10 leads to replication fork establishment, which involves three concomitant events, including activation of the ATP hydrolysis function of MCM, unwinding of one additional turn of the double helix, and ejection of the lagging strand template.How CMG activation promotes eviction of the lagging strand template from the MCM pore is unclear, although it is known that extensive DNA unwinding requires replication protein A.The isolated CMG is a relatively slow helicase, yet cellular rates of DNA replication can be achieved in vitro in the presence of fork-stabilization factors Csm3-Tof1 and Mrc1.Despite these advances, a complete understanding of DNA fork unwinding and of fast and efficient replisome progression is still lacking.Mechanistic models for helicase translocation have been proposed in the past, based on streamlined systems.For example, crystallographic and cryo-electron microscopy work on substrate-bound homo-hexameric ring-shaped helicases helps explain how nucleic acid engagement can be modulated by the nucleotide state around the six nucleoside triphosphate hydrolysis centers.In most structures, five subunits form a right-handed staircase around the nucleic acid substrate and a sixth protomer is found disengaged from the spiral.By morphing between six rotated copies of the same structure, a translocation mechanism can be proposed whereby sequential cycles of nucleotide binding, hydrolysis, and product release drive the successive movement of neighboring subunits.According to the rotary model, translocation occurs with a hand-over-hand mechanism where each subunit engages, escorts, and disengages from DNA, cycling from one end of the staircase to the other.Two variations of rotary cycling have been proposed, either based on a closed planar ring, where the protein staircase is formed by nucleic-acid-interacting pore loops, or based on non-planar rings with entire subunits arranged in a helical structure around DNA.Homo-hexameric helicases, however, cannot be used to formally prove hand-over-hand rotary translocation, as they lack asymmetric features that would allow tracking the DNA with respect to the individual protomers within the ring.Due to its inherent asymmetry, the hetero-hexameric MCM motor seems like an ideal tool to study translocation.Gaining a molecular understanding of DNA unwinding with this system, however, has proven challenging.Promiscuous modes of DNA binding have in fact been observed for MCM, as it is capable of engaging either leading or lagging strands inside its central pore.In particular, it is established that yeast CMG, which translocates in a 3′ to 5′ direction, unwinds DNA by threading
single-stranded DNA 5′ to 3′, N to C terminal through the MCM central channel.However, the inactive ADP-bound MCM double hexamer engaged to duplex DNA contains ATPase protomers that interact with both DNA strands, with Mcm7/4/6/2 binding the lagging strand, and Mcm5/3 touching the leading strand.When bound to a slowly hydrolysable ATP analog, Drosophila CMG has been observed to contact single-stranded DNA using the ATPase elements of Mcm7/4/6 and, in a different experiment, engage a primer template junction with duplex DNA facing C-terminal MCM.Attempts to image ATP-powered translocation in the active CMG thus far have only shown evidence for one mode of single-stranded DNA engagement around the MCM ring, with duplex DNA on the N-terminal side and single-stranded DNA engaged by the Mcm6/2/5/3 ATPase.However, additional rotational states in the ATP-cycling MCM complex have not been observed.One further complication to understanding nucleotide-powered DNA translocation in the CMG is that not all AAA+ sites contribute equally to unwinding.The ATP binding function of certain ATPase centers in the MCM hetero-hexamer can be inactivated with minimal effects on DNA unwinding in vitro.Conversely, the Walker A motif in the Mcm2 and Mcm5 ATPase is essential.This indicates that the MCM ring is functionally asymmetric, making a strictly sequential rotary cycling mechanism hard to envisage for the eukaryotic helicase.These facts, combined with the observation that the CMG has a dynamic ATPase ring gate between Mcm2 and Mcm5, have led to an alternative translocation model to rotary cycling, whereby the helicase inchworms along DNA.According to this inchworm, or “pump jack” model, vertical DNA movement is driven by a spiral-to-planar transition in the ATPase domain, with a gap opening and sealing at the Mcm2/5 interface to provide the power stroke for unwinding.However, a DNA-engaged open spiral CMG with a Mcm2-5 gap has yet to be observed, and the mechanics of ATP-hydrolysis-driven helicase progression remains to be established.To elucidate the mechanism of DNA translocation by the eukaryotic replicative helicase, we have built on DNA fork-affinity purification methods to isolate the CMG helicase undergoing extensive ATP-hydrolysis-powered fork unwinding.By using cryo-EM imaging and single-particle reconstruction, we identify four distinct ATPase states in the Drosophila CMG, corresponding to four modes of DNA binding.By interpolating between our structures, we can generate a model whereby vertical movement of single-stranded DNA from N- to C-terminal MCM occurs by ATPase-powered subunit movements that progress around the MCM ring.To inform this model further, we introduced single amino acid changes in the so-called arginine finger residue of the six CMG catalytic centers.These alterations are known to impair ATP hydrolysis but not binding.By probing the DNA helicase activity of these six CMG variants, we validate our DNA translocation mechanism.This model presents both similarities and differences with the rotary-cycling mechanism proposed for planar homo-hexameric helicases such as E1 and Rho; in particular, we find that parental duplex DNA engagement at the N-terminal front of the helicase is reconfigured in different ATPase states of the CMG.Two-dimensional EM analysis on a larger yeast replisome complex containing Mrc1-Csm3-Tof1 indicates that these factors directly associate with duplex DNA at the front of the CMG helicase.This interaction could help strengthen the coupling between single-stranded 
DNA translocation and the unwinding of duplex DNA at the replication fork.To understand the molecular basis of DNA translocation by the eukaryotic replicative helicase, we focused on the structure and dynamics of CMG complexes during fork unwinding.The polarity of helicase movement and the ability to bypass roadblocks on the DNA substrate have been the subject of debate in recent years, with suggestions that CMG helicases from different species might move in opposite directions.However, it is now established that active translocation of CMG in an in vitro reconstituted assay must satisfy three criteria: the N-terminal side of the MCM ring should face the fork nexus, the helicase should stall upon encountering a stable protein roadblock on the leading strand template, and the helicase should bypass a protein roadblock on the lagging strand template.To isolate the DNA-engaged form of the helicase, we used desthiobiotinylated DNA forks bound to streptavidin-coated magnetic beads as bait to capture purified recombinant budding yeast CMG particles.Binding was performed in the presence of ATPγS, a slowly hydrolysable ATP analog that promotes DNA engagement but not unwinding.To monitor translocation and retain particles on the DNA, we introduced two HpaII methyltransferase (MH) roadblocks on the duplex DNA, covalently linked to the translocation strand, spaced 33 and 48 base pairs from the fork nexus, respectively.Using two MH adducts provides a distinguishing “double dot” feature that can serve as a fiducial for single-particle EM.When ATPγS was supplemented in the biotin elution buffer, we obtained CMG averages with the N-terminal side facing the MH cross-linked duplex DNA, recapitulating the polarity of DNA fork engagement observed previously for yeast CMG.This finding indicates that our fork-affinity purification method selects yeast CMG particles with a substrate engagement mode previously shown to be productive for helicase translocation.Under these conditions, helicase assemblies are spaced from the MH by 10–15 nm and seemingly oscillate with respect to the MCM pore.When biotin elution was instead performed with an ATP buffer to promote translocation, 100% of yeast ATP-CMG particles with visible DNA roadblocks were found in contact with the MH.The CMG helicase stopped by an MH adduct on the DNA is an interesting target in and of itself for future investigation.We reasoned that DNA engagement for a helicase stalled at a roadblock might not reflect a bona fide translocation mode of MCM-DNA engagement.To circumvent this problem, we switched to Drosophila CMG.Magnetic tweezers analysis has revealed that single CMG molecules can advance, pause, slide backward, and advance again when bound to a DNA fork, increasing our chances of capturing not only stalled but also translocating forms of the CMG.By reconstructing distinct DNA binding modes and correlating them to unique ATP-occupancy states, we set out to establish the mechanism of helicase translocation by the CMG.Similar to the experiments with yeast CMG, EM on Drosophila ATPγS-CMG revealed MH roadblocks spaced by 10–15 nm from the N-terminal MCM pore.When biotin elution was instead performed with ATP buffer that promotes translocation, Drosophila CMG particles were found at a variety of distances, closer to the roadblock, while the mobility of the methyltransferase pointers was significantly constrained.This configuration is compatible with fork-nexus engagement and DNA unwinding by the advancing CMG helicase.The observation that both yeast and
Drosophila CMG translocate on DNA with the N-terminal MCM face first is important, as it settles a long-standing controversy in the field.Because ATP-CMG showed proximity to but not direct contact with the leading-strand roadblock, we deemed Drosophila CMG a suitable target for high-resolution structural analysis of ATP-hydrolysis-driven translocation.Direct EM evidence of CMG translocation using DNA with covalently linked methyltransferase roadblocks/fiducials.While CMG only loosely engages the DNA fork when incubated with ATPγS, it translocates toward the methyltransferase roadblocks when the nucleotide is switched to ATP.Duplex DNA movement, tracked by visualizing methyltransferase, is restrained when the CMG is unwinding DNA.To confirm that the CMG helicase can engage in vigorous unwinding in our experimental conditions, we sought to visualize the bypass of a roadblock on the lagging-strand template.To this end we repeated the DNA-affinity purification and EM experiments using a fork with the first MH roadblock cross-linked to the lagging strand, 29 bp downstream of the nexus and 15 bp upstream of a leading-strand roadblock.The resulting 2D averages derived from sample eluted in ATP buffer yielded one lone methyltransferase adduct proximal to the CMG complex, as would be expected for a helicase that has translocated past the lagging-strand block, but not the downstream leading-strand adduct.Our data match the observation that the CMG helicase is not halted by a covalent methyltransferase roadblock on the lagging-strand template.In summary, visual analysis of our helicase-DNA complex satisfies criteria that constitute processive DNA unwinding by the CMG, including lagging-strand but not leading-strand roadblock bypass.Furthermore, we show for the first time that Drosophila CMG, like yeast, translocates with the N-terminal tier of MCM first.Promiscuous polarity of DNA binding has implications for the mechanism of replication fork establishment, addressed in the Discussion section.To understand DNA binding by the CMG helicase during translocation, we collected cryo-EM data of the Drosophila CMG-DNA-MH complex in conditions that promote fork unwinding.Following 3D classification and refinement approaches, we first focused our analysis on the most populated structural state.As previously described for the yeast CMG imaged on a pre-formed fork in ATP buffer, we could clearly visualize duplex DNA entering a B-domain antechamber on the N-terminal side of the MCM ring pore.Initial reconstruction efforts led to cryo-EM maps with disconnected duplex to single-stranded DNA density.However, signal subtraction of the ATPase domain followed by 3D classification allowed us to visualize GINS-Cdc45 and N-terminal MCM encircling duplex DNA, which now appeared connected to a single-stranded DNA density in the pore of the subtracted ATPase.This exercise allows us to draw two conclusions.First, because information on DNA polarity can be extracted from mapping major and minor grooves in the double helix, we can tell that the single-stranded feature in the ATPase channel corresponds to the leading-strand template, with 3′ facing the MCM C terminus.Second, because signal subtraction of the ATPase tier improved residual particle averaging and hence the DNA map, we infer that the subtracted domain must undergo significant structural changes with respect to DNA.To understand how conformational transitions in the ATPase ring modulate DNA engagement inside the MCM cavity, we performed extensive three-dimensional 
classification on the non-subtracted particle dataset.These efforts led us to identify a CMG-DNA “state 2”.The MCM architectures in states 1 and 2 differ drastically, as the ATPase shifts en bloc with respect to the N-terminal tier, translating by 8 Å in a direction perpendicular to the MCM pore.To identify any additional states within the two major 3D classes, we performed further classification focused on the ATPase domains.This analysis led us to identify four different DNA engagement modes of the CMG, with global resolutions ranging from 3.7 to 4.5 Å.In these structures, DNA binding involves previously described ATPase pore loops named the PreSensor 1 hairpin and helix 2 insertion.In three of these states, ATPase pore loops from a set of four neighboring protomers form a right-handed staircase spiraling around single-stranded DNA.These protomers are Mcm6/2/5/3, Mcm2/6/4/7 and Mcm6/4/7/3, respectively.In a fourth state only three protomers contact single-stranded DNA.The resolution of our maps is sufficient to visualize phosphate bumps in the DNA backbones, allowing us to count contacts of two nucleotides per protomer along the single-stranded DNA stretch.In state 1A, Mcm5 is found at the C-terminal end of the ATPase staircase but does not use the PS1h or h2i motifs to contact DNA.Mcm2 appears detached from the ATPase staircase in this state, poised midway between the top and bottom of the ATPase spiral.Following nomenclature proposed for other hexameric ATPases, we refer to this disengaged ATPase domain as a “seam subunit”.State 1B appears to be virtually identical to state 1A except that the Mcm6 PS1h does not contact DNA.This configuration might reflect a corrupted state in which the DNA became disengaged during sample handling or freezing, or could represent a physiologically relevant form of the helicase.In states 2A and 2B, two neighboring AAA+ modules appear detached from the staircase.With our results, we provide the first direct evidence that, under conditions that promote DNA translocation, the leading strand template can touch different ATPase protomers around the hetero-hexameric MCM ring.Our observations suggest a model whereby pore-loop staircase formation and DNA engagement correlate with nucleotide state in the six ATPase centers, similar to the mechanism proposed from structural studies on homo-hexameric helicases.To corroborate this model, we inspected the six ATPase sites at inter-protomer interfaces in our four states, seeking to identify ATP- and ADP-bound centers.In assigning the ATPase state, we considered three properties, including the openness of the ATPase interface, the active-site geometry, and the cryo-EM map in the ATPase pocket.The openness of the ATPase interface was assessed by measuring the solvent-excluded surface area between protomers, or between nucleotide and the protomer that provides an arginine finger.Since the resolution in the four structures was not always sufficient to assign rotamers, active-site geometry was assessed by measuring the distance between the β carbon of the Walker-A lysine and either the sensor 3 histidine β carbon, or the Arg Finger β carbon.A complete list of these measurements is reported in Table S2.Our analysis highlights a correlation between tight inter-protomer interaction, increased proximity between ATPase site elements, and ATP occupancy.Protomers that engage DNA are generally ATP bound at the N-terminal, 5′-interacting end of the AAA+ spiral and between central ATPase domains.In contrast, protomers at the 
C-terminal 3′-interacting end of the AAA+ spiral are ADP bound.This pattern is compatible with the previously proposed rotary, hand-over-hand model, suggesting that ATP hydrolysis occurs within the last competent ATPase site at the C-terminal end of the AAA+ staircase.In three of our four structures, ATP-bound protomers are all tightly interacting and staircase engaged, and flanked by two or three ADP-bound protomers.One exception is represented by state 2A, which contains two seam subunits engaged in a tight AAA+ interaction around an ATP molecule.Unexpectedly, ATP-Mcm3 contacts DNA through the h2i pore loop, which projects toward the incoming DNA and the N-terminal side of the MCM ring.We note that the DNA in states 2B and 2A has a register shift of one subunit, resulting in vertical DNA repositioning.In ADP-Mcm3 of state 2B, PS1h touches the 3′ end of DNA at the C-terminal end of the staircase while ATP-Mcm3 touches the 5′ end of DNA.Conversely, no direct DNA interaction can be detected for ADP-bound seam subunits in states 1A, 1B, and 2B.Thus, MCM-DNA interactions occur asymmetrically around the ring.In contrast to previously proposed hand-over-hand translocation models, DNA binding around the MCM ring appears asymmetric, which could explain the asymmetry in ATPase site requirements for different hexamer interfaces.Previous biochemical work on Drosophila CMG established that several Walker-A elements in the MCM hexamer tolerate a KA-inactivating amino acid change that is known to impair ATP binding.To establish whether the functional asymmetry extends to the ATP hydrolysis function, we generated six variants of the Drosophila CMG complex.These variants contain an RA substitution in the Arg Finger, which is known to impair ATP hydrolysis but not binding.Helicase assays using these mutated CMG complexes indicate that ATP hydrolysis at the Mcm3-7 ATPase site is essential for unwinding.Conversely, impairing ATP hydrolysis at the Mcm4-6 and Mcm7-4 sites has only a minor effect on unwinding.Finally, Arg Finger substitutions at the Mcm6-2, -2-5, and -5-3 sites only have intermediate effects.Notably, not all sites that strictly require ATP binding also require hydrolysis.In the Discussion section we elaborate on the correlation between structural and functional asymmetry in MCM ATPase, which together explain key features of the DNA unwinding mechanism.The structures determined here not only show a correlation between ATP binding and single-stranded DNA engagement, but also changes in the fork-nexus interaction with N-terminal MCM depending on ATPase state.In fact, in states 1A and 1B, prominent duplex DNA density is visible, primarily interacting with the B domains of Mcm5, -6, and -4.Conversely, in state 2B, duplex DNA can only be seen to interact with stabilizing helicase elements such as the B domain of Mcm5, whereas in state 2A, the incoming duplex DNA is not well resolved.Two-dimensional averages of side views confirm that states 2A and 2B indeed show dynamic duplex-DNA engagement, explaining the weak density protruding from N-terminal MCM in the averaged cryo-EM volume.The observation that duplex DNA faces N-terminal MCM in states 2A and 2B is important because it rules out the possibility that single-stranded DNA interaction as observed in state 2 might result from binding to DNA with inverted polarity, as previously suggested.Likewise, although the single-stranded/duplex DNA junction is not resolved in the states 2A and 2B, it is clearly visible in states 1A and 1B.Interestingly, 
cryoSPARC refinement of a dataset with state 1-like DNA engagement reveals clear density for the lagging strand DNA, departing from the DNA junction split by N-terminal beta hairpin of Mcm7 and threading through an opening between the Mcm5/3 B domains.Thus, DNA can make a ∼90° kink that positions the excluded strand near the front of the helicase.This observation is in striking agreement with a speculative model of the replication fork structure proposed by O’Donnell and Li, based on the analysis of the B domain architecture in the MCM ring.The lagging-strand density feature is notable as it has not been seen in previous Drosophila or yeast CMG assemblies.We also note that the lagging-strand template is positioned in a region of the replisome complex that is known to be occupied by Pol alpha, possibly facilitating Okazaki fragment priming and parental histone redepositioning.Finally, the narrow passage between Mcm3/5 B domains provides an escape route for the lagging-strand template from an antechamber of the MCM central channel.This route provides an explanation for how the advancing CMG helicase could bypass a protein roadblock on the lagging strand.Several factors have been implicated in modulating replisome progression in difficult-to-replicate regions.For example, Mcm10, a firing factor essential for replication fork establishment, has been proposed to change the mode of CMG-fork-nexus engagement, hence facilitating lagging-strand roadblock bypass.However, the Mcm10-CMG interaction is dynamic and could not be characterized in previous EM imaging attempts.A second factor implicated in modulating replisome progression is the fork-stabilization complex, which is formed by Mrc1, Csm3, and Tof1 and competes for the same binding site as Mcm10 on the CMG.According to in vitro reconstitution studies with purified yeast proteins, cellular rates of DNA replication can be achieved when the reconstituted replisome is supplemented with the MTC assembly.We postulated that at least some of these factors might interact with DNA at the replication fork and, perhaps, select for a substrate engagement mode productive for unwinding.To test our hypothesis, we used yeast proteins to reconstitute a Pol epsilon-CMG-Csm3-Tof1-Mrc1 replisome complex on DNA, using the fork-affinity purification strategy described above.A full Pol epsilon-CMG-Csm3-Tof1-Mrc1 complex could be reconstituted on DNA-bound beads and the fork-bound complex could be eluted with biotin, in quantities sufficient for negative-stain EM analysis.2D averaging revealed a mixture of different complexes, including CMG, CMG-Pol epsilon, and a class containing a distinctive protein feature on the N-terminal side of the MCM ring.This density maps to a position distinct from that occupied by Ctf4 and disappears in a Csm3-Tof1 dropout Pol epsilon-CMG-Mrc1-DNA complex.Our results are consistent with the notion that Csm3-Tof1 is positioned at the front of the helicase.This model is further supported by the observation that a CMG-Csm3-Tof1 assembly retains the N-terminal feature in the absence of Mrc1 and Pol epsilon.After carefully inspecting the Csm3-Tof1 density in the Pol epsilon-CMG-MTC averages, we noted that these components depart from the N-terminal tier of MCM following the same angle seen for the parental duplex DNA in our previously reported yeast Pol epsilon-CMG-DNA complex.This observation prompted us to postulate that the MTC complex might interact with incoming parental DNA.In line with this notion, we observed that the MTC complex could 
be purified using DNA-affinity methods with either a fork or blunt duplex DNA as bait.Dropout experiments indicate that the DNA interaction is mainly supported by Csm3-Tof1, as isolated Mrc1 could not form a stable DNA complex in our bead-based assay.This finding is further confirmed by electrophoretic mobility shift assays indicating that, compared to Mrc1, Csm3-Tof1 has higher affinity for forked or blunt duplex DNA.Gel-shift assays in the presence of an anti-calmodulin binding protein (CBP) antibody or an anti-FLAG antibody indicate that the DNA binding function is provided by MTC factors and not by uncharacterized contaminants.A DNA binding function has previously been described for Mrc1 and Csm3/Tof1 homologs in S. pombe and humans.Combined with the information that Csm3-Tof1 forms a stable assembly with DNA in our fork-affinity purification assay, we suggest that these fork-stabilization factors are likely to contact parental duplex DNA in front of the helicase.In agreement with this hypothesis, although ATPγS-CMG only loosely engages parental DNA, Csm3-Tof1 significantly restrains duplex-DNA oscillation at the replication fork.Future efforts will be focused on visualizing replication-fork advancement using a replisome progression complex containing the fork-stabilization factors, Csm3-Tof1-Mrc1.These endeavors will hopefully explain whether fast and efficient replisome progression is achieved thanks to MTC contacting incoming parental DNA at the fork or alternatively because of a structural change induced in the CMG complex.While DNA bound by isolated ATPγS-CMG is highly flexible, addition of Mrc1-Csm3-Tof1 restrains the parental duplex DNA in one configuration with respect to the helicase central pore.This property of the fork-stabilization complex might play a key role in increasing coupling efficiency between single-stranded DNA translocation and fork unwinding.In the present study, we have analyzed the CMG helicase on a model DNA fork using conditions that allow for ATP-dependent DNA translocation.As observed in other hetero-hexameric AAA+ motors, and inferred from studies on homo-hexameric helicases, the translocation substrate can be found in different positions around the MCM ring as the helicase unwinds the fork.In three of our four structures, single-stranded DNA is engaged by four ATPase subunits via a set of pore loops arranged in a right-handed spiral.In general, ATP binding appears to promote DNA binding and establish the AAA+ staircase structure, while DNA-free subunits are instead ADP bound.One might argue that, from our structural data alone, we cannot rule out that ATPase firing is stochastic.However, rotary cycling appears to make physical sense, resulting in efficient vertical movement from N- to C-terminal MCM, with the 3′ end of DNA leading the way inside the ring channel.Importantly, our mutational data provide strong evidence in support of this translocation mechanism.In our model, translocation involves a clockwise rotation, with the 3′ DNA moving toward the observer when the motor is viewed from the C-terminal end.To represent this sequential rotary cycling movement, a video can be generated by morphing between states 2B→2A→1A.According to the “canonical” rotary hand-over-hand mechanism, an ATP-bound subunit would engage the substrate at the N-terminal end of the staircase, hydrolyse ATP at the penultimate position of the staircase, remain ADP bound at the top of the staircase, and exchange the nucleotide as it transitions from the top to the bottom of the staircase.
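This stepping scheme, together with the specific 2B→2A (one-subunit) and 2A→1A (two-subunit) transitions discussed below, can be captured in a toy state machine. The sketch below is purely illustrative Python, not part of the study: the subunit names, their ring order, the four-protomer staircase and the two-nucleotide grip per protomer follow the text, while the deque bookkeeping, the function name step and the register counter are ours.

```python
from collections import deque

# Toy state machine for the hand-over-hand rotary translocation described in
# the text.  Subunit identities, the four-protomer staircase and the
# two-nucleotide grip per protomer come from the text; the data structures and
# step bookkeeping are illustrative assumptions only.

NT_PER_SUBUNIT = 2  # each staircase protomer contacts two nucleotides of ssDNA


def step(staircase, queue, register, n_subunits=1):
    """Advance the motor by n_subunits (1 models 2B->2A, 2 models 2A->1A).

    staircase: deque of DNA-engaged protomers, N-terminal (5') to C-terminal (3') end.
    queue:     deque of disengaged protomers, next N-end joiner first.
    register:  nucleotides of leading-strand template moved through the ring.
    """
    for _ in range(n_subunits):
        released = staircase.pop()              # ATP hydrolysis releases the C-end protomer
        staircase.appendleft(queue.popleft())   # ATP-bound protomer joins at the N-end
        queue.append(released)                  # released protomer exchanges ADP for ATP
        register += NT_PER_SUBUNIT              # ssDNA advances two nucleotides per subunit step
    return register


# State 2B: staircase Mcm6/4/7/3 (N- to C-end), with Mcm2 and Mcm5 disengaged.
staircase = deque(["Mcm6", "Mcm4", "Mcm7", "Mcm3"])
queue = deque(["Mcm2", "Mcm5"])

register = step(staircase, queue, 0, n_subunits=1)          # 2B -> 2A: Mcm3 out, Mcm2 in
print(list(staircase), register)  # ['Mcm2', 'Mcm6', 'Mcm4', 'Mcm7'] 2

register = step(staircase, queue, register, n_subunits=2)   # 2A -> 1A: Mcm5/3 join as a dimer
print(list(staircase), register)  # ['Mcm3', 'Mcm5', 'Mcm2', 'Mcm6'] 6
```

This sketch only encodes the bookkeeping of the rotary model; it does not capture the h2i-mediated DNA contact of ATP-Mcm3 in state 2A or the asymmetric ATPase requirements discussed below.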
This scheme is followed upon transitioning from states 2B to 2A, where we can model a one-subunit step with a two-nucleotide advancement.In this transition, Mcm3 disengages from the C-terminal end of the spiral as a consequence of ATP hydrolysis, while ATP-Mcm2 joins the DNA-interacting AAA+ staircase from the N-terminal end.Consistent with this modeled transition, we show that ATP hydrolysis at the Mcm3-7 interface is essential for translocation, given that an Mcm3 Arg Finger RA change abrogates unwinding.Conversely, ATP binding at the Mcm5-3 interface is strictly required for activity, given that a Walker-A KA mutation in Mcm3 abrogates translocation, whereas an Mcm5 RA change only partially affects DNA unwinding.Collectively, this evidence supports a model in which Mcm5/3 can transition as an ATP-stabilized rigid dimer, as predicted by morphing between states 2B and 2A.In this mechanism of ATP-hydrolysis-driven single-stranded-DNA translocation by the CMG helicase, asymmetric DNA engagement around the MCM ring explains the asymmetric ATPase requirements for DNA unwinding.Modeling of the subsequent state 2A-to-1A transition reveals a two-subunit step, promoted by ATP binding to Mcm5, which tightens the Mcm2-5 interface and causes the Mcm5/3 dimer to join the AAA+ staircase.This morphed transition is supported by the observation that an Mcm5 Walker-A KA change abrogates unwinding.Conversely, ATP hydrolysis at the C-terminal end of the staircase appears dispensable, given that Mcm7 tolerates an Arg Finger RA mutation with minimal effect on unwinding.By combining our DNA unwinding and structural data, we establish that ATP hydrolysis within the MCM ring is likely to occur at the third inter-subunit interface, counting from the 5′-interacting, N-terminal end of the AAA+ staircase.As our structural and DNA unwinding data support a two-subunit step upon transition from states 2A to 1A, we note that the Mcm6/4 interface would never be found in the ATP-hydrolysis-competent third position of the AAA+ staircase.This observation leads to the prediction that ATPase function might be dispensable at this interface, and indeed, an Mcm4 Arg Finger RA change manifests near-wild-type DNA unwinding activity in our assay, mirroring the effect of an Mcm6 KA change targeting the same inter-protomer interface.At present, we only observe three rotational states around the MCM ring.We readily concede that other translocation intermediates might exist.Crucially, however, we note that with the modeled transitions between our observed rotational states we can explain why selected ATP hydrolysis and ATP binding functions are strictly required to support DNA translocation.At the same time, our model provides a rationale for why ATP binding or hydrolysis is dispensable at other sites.Overall, the modeled transitions are supported by available mutational data and collectively not only describe a mechanism for the ATP-powered translocation of single-stranded DNA by the CMG but also explain the functional asymmetry of the MCM ring.In addition, the unique protein-DNA contacts in state 2A explain how a two-subunit step can physically occur.Unlike states 1A and 2B, where seam subunits are disengaged from any DNA interaction, in state 2A, the h2i pore loop of Mcm3 engages DNA upstream of the staircase-engaged DNA.Upon transitioning from state 2B to state 2A, Mcm3 lets go of the PS1h-mediated staircase-DNA interaction, while the h2i pore loop reaches for single-stranded DNA entering the N-terminal side of the MCM ring.The
subsequent exchange of ADP for ATP by Mcm5 causes the double step by promoting the en bloc engagement of Mcm5/3 with the four-subunit DNA staircase.In this state both Mcm5 and Mcm3 subunits fully engage four nucleotides of single-stranded DNA.These actions generate rotational movement around the motor of the CMG and also vertical movement of single-stranded DNA through the MCM ring.While we do see a small gap at the Mcm2-5 interface arising in state 2A, the ring remains planar and compressed, indicating that a spiral-to-planar transition is not required for helicase advancement, as has instead previously been suggested.Strikingly, fork-nexus engagement is altered in different rotational states, as observed on the N-terminal side of the MCM ring.We show that the fork-stabilization factors Csm3-Tof1 bind to duplex DNA in vitro, associate with the N-terminal domain of MCM, and align with incoming parental DNA at the fork.We postulate that these factors might play a role in selecting and stabilizing productive substrate engagement for CMG translocation, hence strengthening the coupling between DNA rotation and unwinding at the fork.Consistent with this notion, Tof1 and Csm3 have been implicated in inhibiting excessive fork rotation and precatenation, hence preventing genomic instability.Our structural data have significant implications for the mechanism of ATP-dependent origin DNA melting at the onset of replication.The structures presented here provide conclusive evidence that the same AAA+ elements in the MCM ring can touch DNA in opposing orientations between the inactive and active helicase forms.In the MCM double hexamer, the ATPase pore loops of Mcm3 and Mcm5 touch the leading-strand template, whereas Mcm2/4/6/7 ATPase pore loops contact the lagging-strand template.Conversely, in ATP-CMG state 2A, the same pore loops from the six MCM subunits solely touch the leading-strand template, running 3′-to-5′ from C to N.To understand the relevance of this finding, it is useful to describe the steps that lead to origin activation, as currently understood.The duplex-DNA-loaded MCM forms a double hexamer and is inactive; however, it undergoes a conformational change upon recruitment of the GINS and Cdc45 activators, which promotes ADP release and ATP binding, and the concomitant untwisting of duplex DNA by half a turn of the double helix.A transient interaction with Mcm10 subsequently leads to the activation of the ATP hydrolysis function and ejection of the lagging strand template from the MCM ring pore.Concomitant with these two events, the CMG unwinds one whole turn of the double helix.Our data provide a rationale for how the lagging strand can be disengaged and actively expelled from the ATPase ring pore, as ATP-dependent, leading-strand translocation ensues.One complete round of ATPase firing around the MCM ring would unwind one turn of the double helix and cause the leading-strand template to engage, in different steps, all AAA+ protomers around the ring.This event would occur irrespective of whether the ATP hydrolysis is stochastic or sequential, causing the translocation strand to “sweep” across the inner perimeter of the MCM pore, dislodging the lagging strand from its unique ATPase binding site.Mobilization of the leading strand inside the MCM ring would in turn put the lagging strand template into a high-energy state favoring displacement.At this stage, lagging-strand escape from the helicase pore would be allowed by the opening of an MCM ring gate, which has been proposed to be Mcm10
mediated.An MCM single-stranded DNA binding element could play an important role in this process.Mapping to a position underneath the h2i pore loops on the N-terminal domain of the adjacent Mcm4/6/7 protomers, the MSSB could stabilize the lagging-strand template as it is being displaced from its AAA+ binding site, en route to ejection from the helicase ring pore.Future structural characterization of the origin unwinding reaction will be key for a full mechanistic understanding of this process.While DNA unwinding by the CMG occurs with 3′-to-5′ polarity, multiple lines of evidence indicate that the MCM motor is capable of interacting with DNA in either direction.We note that promiscuity of substrate binding by ATPase pore loops is not a novelty for AAA+ motors.One example is the Rpt1-6 motor of the proteasome, which can translocate with equal efficiency on polypeptides threaded into the pore with N-to-C or C-to-N polarity.According to our CMG model, the promiscuity of DNA engagement is a core feature in the mechanism of ATP-dependent origin DNA unwinding.This model reconciles 15 years of diverging findings on the directionality of DNA engagement by the replisome.Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Alessandro Costa.Material will be made available upon reasonable request.This study did not generate new unique reagents.Yeast proteins were purified from Saccharomyces cerevisiae strains containing integrated expression constructs and grown at 30°C in YEP media supplemented with 2% raffinose.Drosophila melanogaster proteins were purified from baculovirus-infected female High-five cells incubated at 27°C, as previously described.Viruses used in the studies were constructed following the manufacturer’s manual for the Bac-to-Bac expression system from Invitrogen.The pFastBac1 recombination vectors containing the cDNAs for all 11 wild-type CMG subunits were described in detail before.These vector templates were used for generation of CMG mutants using PCR-based mutagenesis.Arginine finger residues were targeted and alanine substitutions were introduced to generate the RA point mutants.These include R641 in MCM2, R473 in MCM3, R645 in MCM4, R510 in MCM5, R521 in MCM6, and R514 in MCM7.Purified Mrc1 and/or Csm3-Tof1 was serially diluted in Protein Binding Buffer and preincubated on ice for 5 min.Diluted samples were subsequently mixed with 300 nM duplex DNA or DNA forks in 10 μl reactions and incubated for 30 min on ice.Protein-DNA complexes were resolved by polyacrylamide gel electrophoresis using a 4% polyacrylamide gel run at 100 V for 90 min in 0.5x TAE buffer, after which nucleic acids were visualized by staining with SYBR Safe.To verify that the DNA binding function is contained in Csm3-Tof1 and Mrc1 and not in uncharacterized contaminant proteins, we performed native PAGE super-shift assays using antibodies specific for the CBP-Csm3 or the FLAG-Mrc1.DNA-Csm3-Tof1 complexes were pre-assembled by mixing Csm3-Tof1 and DNA-fork substrate at concentrations of 200 nM and 300 nM, respectively.DNA-Mrc1 complexes were pre-assembled by mixing Mrc1 and fork at concentrations of 1,600 nM and 300 nM, respectively.The pre-assembled complexes were mixed with anti-CBP antibody or anti-FLAG antibody to respectively assay DNA-Csm3-Tof1 or DNA-Mrc1 complex formation.The DNA-Csm3-Tof1 anti-CBP super-shift was resolved using a 4% PAGE run in 0.5x TAE.The DNA-Mrc1 anti-FLAG super-shift was resolved using a 1.2% agarose gel, run in 0.2x
TB.Both gels were stained with SYBR Safe.300-mesh copper grids with a continuous carbon film were glow-discharged for 30 s at 45 mA with a 100x glow discharger.4-μl samples were applied to glow-discharged grids and incubated for 1 minute.Following blotting of excess sample, grids were stained by stirring in four 75-μl drops of 2% uranyl acetate for 5, 10, 15 and 20 s respectively.Excess stain was subsequently blotted dry.400-mesh lacey grids with a layer of ultra-thin carbon were glow-discharged for 1 min at 45 mA with a 100x glow discharger.4-μl fork-bound DmCMG eluted with ATP was applied to glow-discharged grids and incubated for 2 minutes.Excess sample was subsequently blotted away for 0.5 s using a Vitrobot Mark IV at 4°C and ∼90% humidity.To increase particle concentration, a second 4-μl sample was applied to blotted grids and incubated for 2 minutes.Following blotting for 3 s the sample was plunge-frozen into liquid ethane.Data were acquired on a FEI Tecnai LaB6 G2 Spirit electron microscope operated at 120kV and equipped with a 2K x 2K GATAN UltraScan 1000 CCD camera.Micrographs were collected at x30,000 nominal magnification with a defocus range of −0.5 to −2.5 μm.High-resolution cryo-EM data were acquired on a Titan Krios operated at 300kV and equipped with a K2 Summit detector operated in counting mode with 30 frames per movie.Micrographs were collected at x130,000 nominal magnification using a total electron dose of 50 e/Å2 and a defocus range of −2.0 to −4.1 μm.All particles were picked semi-automatically using e2boxer in EMAN2 v2.07 and contrast transfer function parameters were estimated by Gctf v1.18.All further image processing was performed in RELION v2.1.Particles were extracted with a box size of 128 pixels for initial reference-free 2D classification and CTF was corrected using the additional argument–only_flip_phases.To allow visualization of roadblocks in fork affinity purified CMG samples, helicase side views were selected for further rounds of 2D classification following particle re-extraction using a larger box size.ScCMG samples eluted with ATPγS or ATP showed 5,558 and 3,933 side-view particles respectively, out of which 587 and 468 displayed roadblock densities.Similarly, DmCMG samples eluted with ATPγS or ATP showed 4,875 and 31,485 side-view particles respectively, out of which 1,712 and 8,227 displayed additional roadblock densities.DmCMG samples with leading and lagging strand M.HpaII roadblocks eluted with ATPγS or ATP showed 16,420 and 25,397 side-view particles respectively, out of which 820 and 4,162 displayed additional roadblock densities.The 30-frame movies collected were corrected for beam-induced motion using 5 × 5 patch alignment in MotionCor2 whereby all frames were integrated.CTF parameters were estimated on non dose-weighted micrographs by Gctf v.1.18.Particles were picked using crYOLO of the SPHIRE software package.All subsequent image processing was performed in RELION-3 and cryoSPARC.An initial dataset of 3,296,333 binned-by-3 particles were extracted from 19,097 dose-weighted micrographs with a box size of 128 pixels.After two rounds of 2D classification 1,151,231 high-resolution CMG averages were selected and re-extracted as unbinned particles with a box size of 384 pixels.An initial 3D structure was generated by homogeneous refinement in cryoSPARC using a previous structure of DNA-bound CMG low-pass filtered to 30 Å as a starting model.The resulting CMG structure was subjected to three-dimensional classification with alignment in RELION 
that yielded 2 high-resolution classes with different DNA-binding modes in the central MCM channel.The remaining structures appeared severely anisotropic and were discarded.The largest of the two structures after initial 3D classification was 3D refined in RELION followed by Bayesian particle polishing and one round of CTF refinement to solve a structure at 3.46 Å resolution.To better resolve DNA densities in the MCM central channel, ATPase domains were subtracted and the resulting particles were analyzed by 3D classification in RELION.In parallel efforts, focused 3D classification was performed on the ATPase domain of State 1 unsubtracted particles.These endeavors resulted in the identification of two states with a one-subunit register shift.3D refinement, followed by Bayesian particle polishing and one round of CTF refinement of these structures allowed us to solve two structures at 3.70 Å and 3.99 Å resolution respectively.The smaller of the two structures after initial 3D classification was 3D refined in RELION and subjected to one additional round of 3D classification with alignment that eliminated some residual anisotropy and led to the determination of a structure from 117,560 particles.Following 3D refinement, Bayesian particle polishing and two rounds of CTF refinement, this structure was solved to 4.23 Å resolution.Further 3D classification of this particle subset, focused on the AAA+ domain, resulted in the identification of two structures with DNA-binding register shifted by one subunit.3D refinement, Bayesian particle polishing and one round of CTF refinement of these structures allowed us to refine two structures at 4.28 Å and 4.46 Å resolution respectively.An alternative initial 3D structure was generated by homogeneous refinement in cryoSPARC following less stringent 2D classification.Further processing of this particle subset, including two rounds of heterogeneous refinement in cryoSPARC, allowed us to determine an alternative structure at 3.88 Å resolution with ATPase DNA-binding similar to that of State 1, but with lagging strand density projecting from the N-terminal side of the helicase.Homology models for Drosophila CMG were obtained using Swiss-Model.The cryo-EM maps generated with RELION were sharpened with phenix.auto_sharpen using resolution rage between 3.3 and 6 Å.To handle residual anisotropy in the structures three flags were employed, local_sharpening; local_aniso_in_local_sharpening and remove_aniso.While homology models for GINS and Cdc45 were initially docked as rigid bodies, MCM subunits were first split in three rigid bodies with restrains for secondary structure elements and for planarity in the base pairing.The atomic models were corrected with Coot according to map density, geometries and chemistry.ATP and ADP molecules were manually fitted into densities.Single-stranded DNA was built by hand following the phosphate backbone and bases densities in Coot.The final atomic models were refined using phenix.real_space_refine with restrains for secondary structure elements and for base pair planarity.The quality of the atomic models was evaluated with the comprehensive cryo-EM validation tool in Phenix using the atomic models corrected with Coot and the maps generated by Relion Refine3D, as recommended in Afonine et al.Inter-protomer buried area was measured using the PDBe-PISA webserver, between each pair of neighboring MCM AAA+ domains or between each nucleotide and the opposed, Arg-finger providing ATPase module.All yeast proteins were expressed in S. 
cerevisiae cells and harvested following the same procedure.Cells were grown at 30°C in YEP media supplemented with 2% raffinose.At a cell density of ∼2-3x107 cells/ml, expression was induced for 3 hours by the addition of 2% galactose.Cells were harvested by centrifugation at 5,020 x g for 30 min at 4°C.After washing pellets in lysis buffer cells were re-centrifuged at 4,000 x g for 20 min at 4°C.Cells were subsequently resuspended in lysis buffer at half pellet volumes, flash frozen in liquid nitrogen and crushed at −80°C using a 6875D Freezer/Mill® Dual Chamber Cryogenic Grinderfreezer mill at intensity 15.ScCMG was expressed and purified as previously described using the yeast strain yJCZ3.Following harvesting in CMG lysis buffer) the cell powder was resuspended in Buffer C-100 supplemented with 10 mM Mg2, 25 units/ml benzonase and complete protease inhibitor tablets.The lysate was incubated at 4°C for 45 minutes and cleared by ultracentrifugation at 235,000 x g for 60 minutes at 4°C.Clear supernatants were incubated for 3 hours at 4°C with 4 mL anti-FLAG M2 agarose beads pre-equilibrated in Buffer C-100.Beads were subsequently washed with 150 mL Buffer C-100 after which bound proteins were eluted by incubation at room temperature for 30 minutes with the same buffer supplemented with 500 μg/ml FLAG peptide and complete protease inhibitor tablets.The eluate was collected and further proteins were eluted by repeating the FLAG peptide incubation for an additional 20 minutes.Combined eluates were passed through a 1 mL HiTrap SPFF column and injected onto a MonoQ 5/50 GL column, both equilibrated in Buffer C-100.Proteins were washed with 10 CV of the same buffer and eluted with a 100-550 mM KCl gradient over 20 CV in Buffer C.CMG peak fractions were diluted in Buffer C to 150 mM KCl and injected onto a MonoQ 1.6/5 PC column equilibrated in Buffer C-150.Proteins were washed with 10 CV of the same buffer and eluted with a 150-550 mM KCl gradient over 15 CV in Buffer C. 
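To make the ion-exchange steps above easier to reproduce, the sketch below converts a linear salt gradient specified in column volumes (for example, the 100-550 mM KCl gradient over 20 CV used on the MonoQ 5/50 GL column) into the approximate KCl concentration of each collected fraction. The 1 mL column volume corresponds to the nominal MonoQ 5/50 GL geometry, but the 0.5 mL fraction size is an illustrative assumption rather than a value taken from the protocol.

```python
# Minimal sketch: convert a linear salt gradient defined in column volumes (CV)
# into the expected [KCl] at the midpoint of each collected fraction.
# Assumptions (not from the protocol): 1 mL column volume, 0.5 mL fractions.

def gradient_concentrations(c_start_mM, c_end_mM, gradient_cv, column_volume_ml, fraction_ml):
    """Return a list of (fraction number, approximate mM KCl at the fraction midpoint)."""
    total_ml = gradient_cv * column_volume_ml
    n_fractions = int(total_ml / fraction_ml)
    concs = []
    for i in range(n_fractions):
        midpoint_ml = (i + 0.5) * fraction_ml
        frac_of_gradient = midpoint_ml / total_ml
        conc = c_start_mM + (c_end_mM - c_start_mM) * frac_of_gradient
        concs.append((i + 1, round(conc, 1)))
    return concs

if __name__ == "__main__":
    # MonoQ 5/50 GL step described above: 100-550 mM KCl over 20 CV
    for frac, conc in gradient_concentrations(100, 550, 20, 1.0, 0.5)[:5]:
        print(f"fraction {frac}: ~{conc} mM KCl")
```

The printed values are midpoint estimates only; the conductivity trace recorded by the chromatography system remains the authoritative record of where a peak eluted.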
CMG peak fractions were dialysed against Protein Binding Buffer2, 5% glycerol, 0.02% NP-40, 1 mM DTT) for 3 hours at 4°C.ScPolε was expressed and purified as previously described using the yeast strain yAE99.Following harvesting in Buffer E-500 supplemented with complete protease inhibitor tablets, the cell powder was resuspended in Buffer E-400 supplemented with complete protease inhibitor tablets.The lysate was incubated at 4°C for 45 minutes and cleared by ultracentrifugation at 235,000 x g for 60 minutes at 4°C.Clear supernatants were supplemented with 2 mM CaCl2 and incubated for 2 hours at 4°C with 3 mL Calmodulin Affinity Resin pre-equilibrated in Buffer E-400.Beads were subsequently washed with 300 mL Buffer E-400 supplemented with 2 mM CaCl2 after which bound proteins were eluted by incubation at 4°C with Buffer E-400 supplemented with 2 mM EDTA and 2 mM EGTA.Pooled elutions were injected onto an SP Sepharose Fast Flow 1 mL column attached to a MonoQ 5/50 GL column and washed with 20 CV Buffer E-400.Following removal of the SP Sepharose Fast Flow column, proteins were eluted with a 400-1,000 mM KOAc gradient over 15 CV in Buffer E.Polε fractions were pooled, dialysed against Buffer E-400 and concentrated using a 30,000 MWCO cut-off spin column.50 μl concentrated sample was subsequently passed over a Superose 6 3.2/300 gel filtration column in Buffer E-400.ScMrc1 was expressed and purified as previously described using the yeast strain yJY32.Cells were harvested, lysed and resuspended in Buffer T-400 supplemented with complete protease inhibitor tablets.The lysate was incubated at 4°C for 45 minutes and cleared by ultracentrifugation at 235,000 x g for 60 minutes at 4°C.Clear supernatants were incubated for 2 hours at 4°C with 2 mL anti-FLAG M2 agarose beads pre-equilibrated in Buffer T-400.Beads were subsequently washed with 50 CV Buffer T-400 and 25 CV Buffer T-200 followed by incubation for 10 min in Buffer T-200 supplemented with 1 mM ATP and 10 mM Mg2.After washing beads in 10 CV Buffer T-200, bound proteins were eluted by incubation at room temperature for 45 minutes with the same buffer supplemented with 500 μg/ml FLAG peptide and complete protease inhibitor tablets.The eluate was collected and further proteins were eluted by repeating the FLAG peptide incubation for an additional 30 minutes.Combined eluates were subsequently injected onto a MonoQ 1.6/5 PC column equilibrated in Buffer T-200.Proteins were washed with 10 CV of the same buffer and eluted with a 200-600 mM NaCl gradient over 15 CV in Buffer T.Mrc1 peak fractions were dialysed against Buffer T-200.ScCsm3/Tof1 was co-expressed and co-purified as previously described using the yeast strain yAE48.Cells were harvested, lysed and resuspended in CBP lysis buffer supplemented with complete protease inhibitor tablets.The lysate was incubated at 4°C for 45 minutes and cleared by ultracentrifugation at 235,000 x g for 60 minutes at 4°C.Clear supernatants were supplemented with 2 mM CaCl2 and incubated for 2 hours at 4°C with 2 mL Calmodulin Affinity Resin pre-equilibrated in CBP lysis buffer.Beads were subsequently washed with 75 CV CBP lysis buffer supplemented with 2 mM CaCl2 after which bound proteins were eluted by incubation at 4°C with CBP lysis buffer supplemented with 2 mM EDTA and 2 mM EGTA.Pooled elutions were concentrated to 500 μl using a 30,000 MWCO cut-off spin column and passed over a Superdex 200 10/300 gel filtration column equilibrated in CBP Gel Filtration Buffer.Csm3-Tof1 peak fractions were pooled and 
concentrated to 100 μl using a 30,000 MWCO cut-off spin column.ScCtf4 expression plasmids were transformed into BL21 E. coli cells.Cells were grown in LB media at 37°C to an optical density of 0.5 before expression was induced with 1 mM IPTG for 3 hours.Cells were harvested by centrifugation at 5,020 x g for 20 min at room temperature.Pelleted cells were subsequently resuspended in Ctf4 lysis buffer supplemented with complete protease inhibitor tablets and lysed by sonication.Lysate was cleared by centrifugation at 27,216 xg for 30 min at 4°C and incubated for 90 min at 4°C with 1 mL Ni-NTA Agarose Resin pre-equilibrated in Buffer A-20.After washing resin with 20 CV Buffer A-20, proteins were eluted five times with 1 mL Buffer A-250.Elutions were pooled and dialysed against Buffer B-100 before injection onto a MonoQ 5/50 GL column equilibrated in Buffer B-100.Proteins were washed with 10 CV of the same buffer and eluted with a 100-1,000 mM NaCl gradient over 30 CV in Buffer B.Ctf4 peak fractions were pooled and concentrated to 450 μl using a 30,000 MWCO cut-off spin column before being passed over a Superdex 200 16/600 gel filtration column equilibrated in Buffer B-150.Ctf4 trimer peak fractions were pooled and concentrated to 4 mg/ml using a 30,000 MWCO cut-off spin column.Drosophila melanogaster CMG was expressed and purified as previously described.Following bacmid generation for each subunit of DmCMG, Sf21 cells were used for transfection and virus amplification stages to generate P2 stocks using serum-free Sf-900TM III SFM insect cell medium.In the P3 virus amplification stage, 100 mL Sf9 cell cultures were infected with 0.5 mL of P2 stocks with an approximate MOI of 0.1 for each virus and incubated in 500 mL Erlenmeyer sterile flasks for 4 days at 27°C, shaking at 100 rpm.After 4 days, 4 L of Hi-Five cells supplemented with 10% FCS were infected using fresh P3 stocks with MOI of 5.Cells were incubated at 27°C and harvested after 60 hours.Cell pellets were washed with PBS supplemented with 5 mM MgCl2, resuspended in lysis buffer and frozen in 10 mL aliquots on dry ice.Protein purification was performed at 4°C.Cell pellets were thawed and lysed by applying at least 50 strokes per 30 mL of cell pellets using tissue grinders after which the lysate was cleared by centrifugation at 24,000 x g for 10 min.Supernatants were incubated for 2.5 hours with 2 mL ANTI-FLAG M2 agarose beads pre-equilibrated with Buffer C. Non-bound proteins were removed by centrifugation at 200 x g for 5 minutes followed by bead washing with 30 mL of Buffer C-100.Bound proteins were subsequently eluted by incubation at room temperature for 15 min with Buffer C-100 supplemented with 200 μg/ml FLAG peptide.The eluate was passed through a 1 mL HiTrap SPFF column and injected onto a MonoQ 5/50 GL column, both equilibrated in buffer C-100.Proteins were washed with 10 CV of the same buffer and eluted with a 100-550 mM KCl gradient over 20 CV in buffer C. CMG peak fractions were diluted in buffer C to 150 mM KCl and injected onto a MonoQ 1.6/5 PC column equilibrated in buffer C-150.Proteins were washed with 10 CV of the same buffer and eluted with a 150-550 mM KCl gradient over 15 CV in Buffer C. 
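The insect-cell expression above specifies infection at an approximate MOI of 0.1 for P3 amplification and an MOI of 5 for the 4 L production culture. As a rough planning aid, the sketch below applies the standard MOI relationship (virus volume = total cells × MOI / titer); the cell densities and viral titers used are placeholder assumptions for illustration and are not values reported in the protocol.

```python
# Minimal sketch: volume of baculovirus stock needed to reach a target MOI.
# volume (mL) = (cells_per_mL * culture_mL * MOI) / titer (pfu/mL)
# Cell densities and titers below are placeholder assumptions, not protocol values.

def virus_volume_ml(cells_per_ml, culture_ml, moi, titer_pfu_per_ml):
    total_cells = cells_per_ml * culture_ml
    return total_cells * moi / titer_pfu_per_ml

if __name__ == "__main__":
    # P3 amplification: 100 mL Sf9 culture at MOI ~0.1 (assumed 1e6 cells/mL, 1e8 pfu/mL P2 titer)
    print("P3 amplification:", virus_volume_ml(1e6, 100, 0.1, 1e8), "mL of P2 stock per virus")
    # Production: 4 L Hi-Five culture at MOI 5 (assumed 1e6 cells/mL, 1e9 pfu/mL fresh P3 titer)
    print("Production infection:", virus_volume_ml(1e6, 4000, 5, 1e9), "mL of P3 stock per virus")
```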
CMG peak fractions were dialysed into Protein Binding Buffer2, 5% glycerol, 0.02% NP-40, 1 mM DTT) for 2 hours.A list of oligonucleotides is provided in Table S3.Roadblock experiments.To prepare Cy5-labeled fork DNA substrate containing a single MH roadblock on the leading-strand template, oligonucleotides A, B and C were annealed at equimolar concentrations, and the resulting nick was sealed with T4 DNA ligase.DNA was purified via electroelution after separating on 8% PAGE.M.HpaII was crosslinked in methyltransferase buffer supplemented with 100 μM S-adenosylmethionine, and incubated at 37°C for 3 hours.M.HpaII crosslinked substrate was separated on 8% PAGE and purified via electroelution.Cy5-labeled fork DNA with a lagging-strand MH roadblock was prepared by annealing oligonucleotides D and E.The substrate was gel purified, crosslinked to MH, and re-purified as described above.For unwinding assays, Drosophila CMG was first bound to fork DNA by incubating 3-5 nM DNA substrate with 30 nM Drosophila CMG in CMG-binding buffer supplemented with 0.1 mM ATPγS in 5 μL volume at 37°C for 2 hours.To initiate unwinding, 15 μL ATP mix was added into the reaction.The ATP mix contained 1.5 μM 40 nt polyT oligonucleotide to capture free CMG and 150 nM oligonucleotide with the sequence 5′-GGATGCTGAGGCAATGGGAATTCGCCAACC-3′ to prevent re-annealing of DNA.After further 30 min incubation at 37°C, reactions were stopped with SDS-containing buffer, separated on 8% PAGE, and imaged on Fujifilm SLA-5000 scanner using 635-nm laser and LPR/R665 filter.A 70-mer oligonucleotide was designed such that 40 nucleotides anneal to a M13mp18ssDNA plasmid, leaving a 30-mer polyT extension at the 5′ end.The 5′ end was previously radioactively labeled with γ-32P ATP and T4 polynucleotide kinase, subsequently purified through a llustra MicroSpin G-50 column and mixed with the M13mp18 ssDNA plasmid.The reactions were heat-denatured for 1 minute and annealed through gradual cooling to room temperature.Free oligonucleotide was separated by purification through MicroSpin S-400 HR columns.The helicase assays were carried out in 25mM HEPES pH 7.6, 10% glycerol, 50mM sodium acetate, 10mM magnesium acetate, 0.2mM PMSF, 1mM DTT, with addition of 250 μg/ml insulin.Desired protein concentrations were mixed with 1-2fmol of a circular M13 based DNA substrate and unwinding initiated in the presence of 0.3mM ATP in a total reaction volume of 10 μL at 30°C.Reactions were stopped after 30 minutes by addition of 0.1% SDS and 20mM EDTA, and the reaction products were immediately electrophoretically separated on a TBE-acrylamide gel.To prepare desthiobiotin-tagged, M.HpaII-labeled DNA fork substrates containing two M.HpaII on the leading-strand template, oligonucleotides F, G, H and I were annealed at a 1:1:2:1 molar ratio, and the resulting nicks were sealed with T4 DNA ligase.DNA was purified via electroelution after separating on 8% PAGE.M.HpaII was crosslinked in methyltransferase buffer supplemented with 100 μM S-adenosylmethionine, and incubated at 37°C for 5 hours.M.HpaII crosslinked substrate was separated on 8% PAGE and purified via electroelution.To prepare desthiobiotin-tagged MH-labeled DNA fork substrates containing one MH on the leading-strand template and one M.HpaII on the lagging-strand template, oligonucleotides J + K and L + M were annealed separately at equimolar concentrations.The annealed oligonucleotide samples were then mixed and nicks were sealed with T4 DNA ligase.The substrate was gel purified, crosslinked to M.HpaII, 
and purified as described above.To isolate DNA-bound CMG complexes, a fork affinity purification approach was adapted from a previously published method.Here, desthiobiotin-tagged DNA forks were immobilised onto streptavidin-coated magnetic beads.6 μl M-280 Streptavidin Dynabeads® slurry was added to each reaction tube and washed twice in 20 μl DNA Binding Buffer.Washed beads were resuspended in 20 μl 250 nM MH-conjugated DNA forks and incubated for 30 minutes at 30°C shaking at 1,250 rpm in a thermomixer.All subsequent incubations were performed at the same conditions.Following fork immobilisation, supernatants were discarded and beads were washed once in DNA Binding Buffer and once in Protein Binding Buffer.Fork-bound beads were subsequently resuspended in 250 nM CMG supplemented with 2 mM ATPγS and incubated for 30 minutes.Supernatants were collected to eliminate non-bound CMG and beads were washed twice in Protein Binding Buffer with 2 mM ATPγS.CMG-bound DNA-forks were eluted from beads by resuspension in 10 μl Elution Buffer2, 5% glycerol, 0.02% NP-40, 1 mM DTT, 400 nM biotin) supplemented with 2 mM ATPγS or 5 mM ATP followed by incubation for 30 minutes.The ATP elution with forks harboring both leading and lagging strand roadblocks was also supplemented by 1 μM of oligonucleotide Q. Supernatants were pooled and used for negative stain or cryo-EM grid preparation.To prepare desthiobiotin-tagged fork DNA substrates, oligonucleotides N and O were annealed at a 1:1.2 molar ratio and subsequently immobilised onto streptavidin-coated magnetic beads as described above.250 nM CMG was mixed with 350 nM MCT in the presence of 2 mM ATPγS and incubated on ice for 5 min.Following washing in Protein Binding Buffer, fork-bound beads were resuspended in the CMG-MCT sample and incubated for 30 min before addition of 80 nM Polε and co-incubation for another 15 min.Protein-bound DNA-forks were washed twice in Protein Binding Buffer with 1 mM ATPγS and eluted from beads by resuspension in 12 μl Elution Buffer supplemented with 1 mM ATPγS or ATP.Supernatants were pooled and used for negative stain EM grid preparation.In a parallel experiment, the same affinity purification was performed in the absence of Polε using a DNA fork labeled with two leading strand M.HpaII-conjugates.To prepare desthiobiotin-tagged duplex DNA substrates, oligonucleotide P was PCR-amplified using primers R and S. 
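For the fork affinity purification above, it can help to keep track of how many picomoles of fork and helicase are actually offered to the beads. The sketch below does that bookkeeping for one reaction (20 μl of 250 nM fork per 6 μl of streptavidin bead slurry); the bead binding capacity and the CMG incubation volume are assumed, order-of-magnitude placeholders rather than values from the method, and should be replaced with the manufacturer's figures for the actual bead lot.

```python
# Minimal sketch: picomole bookkeeping for the bead-based fork affinity purification.
# Bead capacity and CMG volume are illustrative assumptions, not protocol values.

def pmol(conc_nM, volume_ul):
    """Convert a concentration in nM and a volume in microlitres to picomoles."""
    return conc_nM * volume_ul / 1000.0  # nM * uL gives fmol; /1000 converts to pmol

fork_pmol = pmol(250, 20)        # 250 nM desthiobiotin-tagged fork in 20 uL
cmg_pmol = pmol(250, 20)         # assumes CMG is offered in a comparable 20 uL volume
bead_capacity_pmol = 6 * 1.0     # assumed ~1 pmol biotinylated duplex per uL slurry (placeholder)

print(f"fork offered:          {fork_pmol:.1f} pmol")
print(f"CMG offered:           {cmg_pmol:.1f} pmol")
print(f"assumed bead capacity: {bead_capacity_pmol:.1f} pmol")
if fork_pmol > bead_capacity_pmol:
    print("fork is offered in excess of the assumed bead capacity")
else:
    print("beads can in principle capture all fork offered")
```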
Desthiobiotin-tagged fork DNA substrates were prepared as in above CMG-Polε-Mrc1-Csm3-Tof1 affinity purification.250 nM DNA constructs were immobilised onto streptavidin-coated magnetic beads as described above.Following washing in Protein Binding Buffer, fork-bound beads were resuspended in 20 μl 350 nM Mrc1, 350 nM Csm3-Tof1 or 350 nM MCT pre-incubated on ice for 5 min.Protein-DNA samples were incubated for 30 min at 30°C shaking at 1,250 rpm.Beads were washed twice in Protein Binding Buffer after which DNA-bound proteins were eluted by resuspension in 12 μl Elution Buffer.100 μl 250 nM CMG supplemented with 2 mM ATPγS was added to 100 μl Calmodulin Affinity Resin equilibrated in Protein Binding Buffer and incubated for 2 hours at 4°C.Beads were subsequently washed in 100 μl PBB with 1 mM ATPγS and resuspended in 100 μl 500 nM Ctf4 supplemented by 1 mM ATPγS.Following incubation for 30 min at 30°C shaking at 1,250 rpm the beads were washed twice in PBB with 1 mM ATPγS and resuspended in 50 μl CBP Elution Buffer supplemented by 1 mM ATPγS.After incubation for an additional 30 minutes the supernatant was separated and incubated with 0.01% glutaraldehyde for 5 min.Cross-linked samples were immediately applied to EM grids for negative staining.100 μl 200 nM CMG supplemented with 2 mM ATPγS was added to 100 μl anti-FLAG M2 agarose beads equilibrated in Protein Binding Buffer with 5 mM MgOAc and incubated for 2.5 hours at 4°C.Beads were subsequently washed in 450 μl PBB with 5 mM MgOAc and 1 mM ATPγS and resuspended in 100 μl 500 nM Csm3-Tof1 supplemented by 1 mM ATPγS.Following incubation for 30 min at 4°C shaking at 1,250 rpm the beads were washed twice in PBB with 5 mM MgOAc and 1 mM ATPγS and resuspended in 100 μl FLAG Elution Buffer supplemented by 1 mM ATPγS.After incubation for an additional 30 minutes at room temperature the supernatant was separated and applied to EM grids for negative staining.Quantification, statistical analysis and validation pertaining to processing of negative stain and cryo-EM images are implemented in the software described in the image processing section of the methods details.Global resolution stimation of refined cryo-EM maps are based on the 0.143 cutoffs of the Fourier Shell Correlation between two half maps refined independently.CMG-DNA maps and atomic models have been deposited with the Electron Microscopy Data Bank and the Protein Data Bank under the following accession codes: State 1A, EMD-4785, PDB 6RAW; State 1B, EMD-4786, PDB 6RAX; State 2A, EMD-4787, PDB 6RAY; State 2B, EMD-4788, PDB 6RAZ.A reporting summary for this article is available in Supplementary Information.We have not generated a new website or forum.
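Global resolution above is quoted at the 0.143 cutoff of the Fourier Shell Correlation between independently refined half maps. The sketch below outlines that calculation with numpy for two half maps already loaded as cubic arrays on the same grid; reading real maps (for example with the mrcfile package) is omitted, the pixel size is an illustrative assumption, and the synthetic half maps exist only to exercise the functions.

```python
# Minimal sketch: gold-standard FSC between two half maps and the resolution at
# the 0.143 cutoff. Half maps are assumed to be cubic numpy arrays on one grid;
# loading real maps from .mrc files is omitted here.
import numpy as np
from scipy.ndimage import gaussian_filter

def fsc_curve(half1, half2, pixel_size_A):
    """Return a list of (spatial frequency in 1/Angstrom, FSC) per Fourier shell."""
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    n = half1.shape[0]
    freq = np.fft.fftfreq(n)  # cycles per pixel along each axis
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = np.rint(np.sqrt(fx**2 + fy**2 + fz**2) * n).astype(int)
    curve = []
    for s in range(1, n // 2):
        mask = shells == s
        num = np.real(np.sum(f1[mask] * np.conj(f2[mask])))
        den = np.sqrt(np.sum(np.abs(f1[mask]) ** 2) * np.sum(np.abs(f2[mask]) ** 2))
        curve.append((s / (n * pixel_size_A), num / den))
    return curve

def resolution_at_0143(curve):
    """Resolution (Angstrom) at the first crossing below FSC = 0.143, if any."""
    for spatial_freq, fsc in curve:
        if fsc < 0.143:
            return 1.0 / spatial_freq
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=2.0)  # band-limited toy map
    half1 = signal + rng.normal(scale=signal.std(), size=signal.shape) * 0.5
    half2 = signal + rng.normal(scale=signal.std(), size=signal.shape) * 0.5
    curve = fsc_curve(half1, half2, pixel_size_A=1.0)  # assumed 1 Angstrom/pixel
    print("estimated resolution at FSC = 0.143:", resolution_at_0143(curve), "Angstrom")
```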
Eickhoff et al. used cryo-EM to image DNA unwinding by the eukaryotic replicative helicase, Cdc45-MCM-GINS. As the hexameric MCM ring hydrolyses ATP, DNA is spooled asymmetrically around the ring pore. This asymmetry explains why selected ATPase sites are essential for DNA translocation. Understanding DNA unwinding informs on replication fork progression.

Voluntary non-monetary approaches for implementing conservation
While protected areas remain the most recognized tool used for biodiversity conservation, their extent does not guarantee the future persistence of global biodiversity.There is an urgency to find effective ways of safeguarding nature for remaining biodiversity outside protected areas.There, expanding human presence poses a growing threat to biodiversity through increasing demand for food, fibre, fuel and other commodities.Urban sprawl, driven by a steadily increasing urban population, is expected to further boost habitat fragmentation and pose additional pressures on ecosystems and wildlife.Consequently, making human-dominated landscapes more hospitable for biodiversity has been recognized as a fundamental strategy to help preserve global biodiversity.Although Walton Hall, UK, which is widely considered as the first modern nature reserve, was established in the 1820s by a private individual, the role of private conservationists is poorly acknowledged despite the roles they can play outside protected areas established by governments and conservation organizations.This is particularly so in the developed world, where private land covers large areas.For example, about half of the US federally listed species have at least part of their range within private land."In Europe, most of the land in the Natura2000 network—wide network of nature protection areas, the centrepiece of EU's nature and biodiversity policy; http://ec.europa.eu/environment/nature/natura2000/index_en.htm) is privately owned.Therefore, conservation efforts implemented on private land play a key role in biodiversity protection; an exceptional example is the privately funded protection of two million acres in Patagonia through Kris and Douglas Tompkins.Biodiversity conservation on private land presents opportunities, but also involves challenges brought about by the social dimension that ultimately contributes to determine costs and availability of land for implementation of conservation.The realization that nature conservation on private land is largely a social challenge has triggered a paradigm shift, from top–down to bottom–up approaches.Among the latter, voluntary programmes represent a widely accepted policy tool for biodiversity conservation on private land.But, despite being voluntary, these are frequently market-based.The voluntary market-based approach for conservation on private land was developed with the rationale of an equitable and fair sharing of costs borne by the individual landowner and public benefits resulting from biodiversity conservation.In this approach, land owners are given monetary compensations for the costs or lost benefits of implementing conservation actions.Thus, the approach entails high, and progressively increasing, costs to conservation budgets because biodiversity conservation on private land is often expensive.Where such considerable costs have been met, the results, in terms of ecological benefits, have been mixed, partly due to the heterogeneity of landowners implementing them.A growing body of evidence suggests that market-based approaches to conservation, albeit effective and relevant in many cases, are not always sustainable in the long term.On the other hand, means to induce individuals to change their behaviour based on intrinsic values and societal moral rather than coercive means or monetary incentives exist, but are less consistently considered in conservation.Consideration of such a voluntary but non-monetary approach is particularly relevant for conservation in modern widely 
modified world, and it is in line with the strategic goal of the Convention on Biological Diversity to “enhance implementation through participatory planning, knowledge management and capacity building”.In this work, we review the scientific literature for studies where a voluntary non-monetary approach to biodiversity conservation has been applied on private land.We first compare the occurrence of this approach to two more traditional ones: coercive and voluntary market-based approaches.This comparison aims to reveal the level of scientific interest given to these alternative approaches.We then analyse the literature to summarize key properties of voluntary non-monetary means for conservation on private land.Here, emphasis is given to constraints on implementation, potential benefits and emergent outcomes, and ways of enhancing participation.Finally, we illustrate how the voluntary non-monetary approach could be implemented in the case of farmland conservation actions.Our search protocol shows that at least in the international scientific literature of ecology and conservation, the voluntary non-monetary approach is seldom a subject of research compared to coercive and market-based approaches.Out of the searched 66,183 papers published in ecology and conservation biology during recent decades, only 101 hits were for voluntary non-monetary approaches, compared to a total of 2544 for coercive and 1071 for voluntary market-based.Out of the 101 hits on voluntary non-monetary approaches, only 16 actually discussed the approach, and just eight explicitly studied it.We caution that our search for papers on voluntary non-monetary actions, based on our predefined keywords, might have missed some of the literature on conservation actions that do not have an economic driver.However, we consider that the voluntary non-monetary approach occurs so much less frequently in scientific literature than the two other abovementioned approaches that it must be genuinely scarcely discussed.Even if rarely the subject of scientific interest, as the above search results suggest, it is nevertheless plausible that the voluntary non-monetary approach is often considered by practitioners, NGOs and other organizations.Indeed, many of the studies that explicitly consider a voluntary non-monetary conservation approach indicate a willingness from people to do conservation in absence of any monetary incentives at all.Voluntary approaches for nature conservation on private land have typically been treated as a single group, including both market-based and non-monetary means.Approaches within this heterogeneous group locate along a continuum between two extremes, one where financial incentives exceed costs involved and fully drive landowner motivation towards conservation, and the other, where no monetary incentives are involved and motivation is fully driven by intrinsic reasons.As de Snoo et al., point out, there is a crucial difference between voluntary approaches that use economic incentives compared to those that completely rely on the self-motivation and intrinsic values of an individual towards conservation.In this study we focus on the latter of these two extremes.A voluntary non-monetary approach primarily applies to simple actions, such as nest-box provision or leaving hedgerows uncut, that private citizens, communities, non-governmental organizations, companies, enterprises can implement in their area of influence without the motivation or need of economic incentives.Actions may target and ultimately benefit 
single species, ecological communities, or entire ecosystems.In order to encourage wide participation by a diversity of actors without significant education in conservation management, actions should be clearly defined, focused and justified, they must be straightforward to understand and to implement, and their implementation must require no specific scientific knowledge or new specialized equipment.Nevertheless, many volunteers may be farmers or other land managers with considerable experience and expertise in other areas and who have access to specialized equipment.Furthermore, the activity should be results-based; it should produce tangible results in a relatively short time in order to provide a non-monetary reward and a way of self-verification.In addition, overall costs of the action must be sufficiently low for them to be applicable without incentives.From the systematic literature searches described above as well as from detected documents in the grey literature, we identified a number of cases where a voluntary non-monetary approach has been used for nature conservation.By these examples, we illustrate the limited actions documented in the scientific literature about the voluntary non-monetary approach, including their take-up by citizens, and their effectiveness when reported.This selection of examples provided is not meant to be exhaustive, rather it can help identifying the main features characterizing this group of actions.Private landowners enthusiastically joined and supported a voluntary conservation programme aimed at protecting howler monkeys in Belize.Millions of nest boxes for birds have been placed in forests, farmlands and domestic gardens, and many bird populations nowadays benefit from extra food voluntarily provided at bird feeders.Off the Atlantic coast of Canada, a voluntary initiative to reduce collision risks with whales, proposed by the International Maritime Organization, was reported to have high compliance by ship vessels.In contrast, voluntary speed reduction of commercial ships from whale watching companies, as well as other transport vessels off the coasts of Massachusetts and California, had a very low compliance rate.Private citizens undertook alien plant eradication on their property within conservancies of South Africa.Forest buffers of small size were retained around raptor nests by private forest owners in order to protect them from forest logging in eastern Finland.Lead is a poisonous element impacting many bird populations.Voluntary approaches to reduce its use in ammunitions and fishing tackles have achieved broad success in the US and Canada, although not in the UK.Voluntary guidelines for land management aimed at protecting vernal pools on private land in Maine, US, achieved mixed success.Fishermen in Namibia have been voluntarily applying a simple and effective solution to greatly reduce incidental bycatch of seabirds.A common factor linking most of the examples above is the presence of a central organization that can reach potential actors and provide information about the application of the action.In addition, these actions were typically both easy and relatively cheap to implement.It thus appears that costs and operational feasibility may restrict the variety of actions suitable for implementation using a voluntary non-monetary approach.We therefore investigated in more detail one specific environment, farmland, which is predominantly privately owned and influenced by intensive management practises that have strong impacts on associated 
wildlife.We identified a number of actions that, within the broad context of farmland conservation, could be implemented through a voluntary non-monetary approach.We considered a list of 119 actions for farmland conservation, an authoritative source of evidence on actions for nature conservation.Its mission is to support practitioners in decision making.These actions are also summarized with overall effectiveness scores in Sutherland et al.To assess the potential suitability for voluntary non-monetary conservation, the 119 farmland actions were scored on three criteria.The first criterion was the feasibility of private untrained citizens to implement the action.The second criterion was the estimated costs of action, including management, damage and opportunity costs.Costs were estimated for the implementation of the action over one hectare of land, and were converted to work-day equivalents per year.This was done in order to bring all costs, monetary and not, to a common unit.The third and last criterion was the existence of evidence in support of the effectiveness of the action.Feasibility and costs were independently estimated by L.V.D., B.A. and I.H., based on knowledge about practical application of the actions in UK, Spain and Finland, respectively.Ultimately, an action was regarded as having good potential to be implemented via a voluntary non-monetary approach if it is cheap, relatively easy to apply and supported by some evidence about effectiveness.However, even when the last criterion is not satisfied, the action can still be tried out, and its effectiveness assessed to accumulate evidence.Out of all 119 farmland actions considered, 108 could be assessed for their feasibility for implementation by a farmer in the UK, Spain and Finland.The 11 actions that could not be scored for feasibility were deemed not applicable for implementation via a voluntary approach by private individuals or too difficult to score.Out of the 108 actions with a feasibility score, 95 actions were estimated for their cost in all of the three countries differing in their geography and experience with conservation on farmland.The average estimated cost across all actions was 5.4 work-days equivalent per hectare per year for UK and Spain, and 3.8 for Finland.Note that this is the cost in the targeted area, rather than across the entire farm.Of the 95 actions, 21 actions were evaluated suitable for implementation by a farmer or land owner.Arbitrarily assuming that actions requiring at maximum of two work-days equivalent per hectare per year are sufficiently cheap to be implemented without monetary incentives, a total of 16 to 17 farmland actions in UK, Spain and Finland fit also this requirement.These actions can thus be regarded to be highly feasible and sufficiently cheap for implementation, of course this is assuming the interventions will be carried out in localized patches rather than across an entire farm.Among all the 16 to 17 actions identified as suitable according to the two first criteria, 10 to 11 actions have been assessed for their effectiveness in at least one study; all of these were reported to have a positive impact.Appendix S4 provides the full list of actions with their estimates of feasibility and costs.The actions identified as suitable for a voluntary non-monetary approach are diverse, from simple and commonplace actions, such as providing nesting boxes for birds, to less known ones, such as raising the mowing height on grassland to benefit wildlife.Most of the suitable actions, such as 
creating open patches or strips in permanent grassland, may benefit a whole community of farmland fauna and flora.Many of the suitable actions identified are exclusive to farmland, such as leaving overwinter stubbles.Yet, several others, such as providing supplementary food or providing short grass for birds, may be applicable in any type of open space, including urban or suburban private and public gardens, parklands, and graveyards.It is important to note that the cost estimations used here refer to a common unit of land of one hectare.This implies that the extent and costs of implementing an action via the voluntary non-monetary approach varies according to the amount of land owned by the private landowner.Although we standardized our estimates for one hectare of land, it will always be up to the owner to ultimately decide how much of land can be enrolled in the action that is not supported by monetary incentives.The ultimate ecological effectiveness will most likely result from the net uptake across the landscape.The topic of spatial pattern of voluntary action, albeit relevant and interesting, requires further study beyond the scope of this work.While it is clear that there is a wealth of actions that could potentially be implemented using a voluntary non-monetary approach, their take-up by individual citizens may be limited by factors other than feasibility and costs.Among these, predominant factors may be lack of awareness of an action, or a lack of encouragement or role models.We suggest that there is great scope for enhancing the take-up of actions that can be implemented via voluntary non-monetary means by using, among others, the theory and operational framework recently formalized around the “nudge” approach.A nudge is defined as a factor that significantly alters the behaviour of people based on characteristics of human nature and psychology.Building upon the theory of “choice architecture”, nudging is a way to influence human choice towards a wealthier and better-quality life style while preserving the freedom of choice of the individual.This approach has received rapid acceptance, e.g. 
by the UK government, as an effective way to enhance the response of citizens to pay taxes or make better life choices.The application of nudging thus differs from financial or legislative approaches, also informally referred to as ‘shoving’.Several organizations have now discovered and make full use of the great power of nudges such as “default” options and “framing” in the presentation of choices, among others.Typically, if an option is designated as “default” among alternative choices, it will be chosen more often than other options.Likewise, framing is relevant because the way in which an option is stated may strongly influence selection from among choices.Nudging can also be used to spread the application of an action by highlighting its successful implementation among neighbours.For example, the regional forestry centre of North Karelia, eastern Finland, has successfully implemented a voluntary non-monetary approach simply by asking forest owners to retain a small forest buffer around raptor nests that would otherwise be destroyed by logging.Such a successful example could be exported to other regions of Finland, where landowners, at the time of being approached, could be made aware that their peers in North Karelia had very successfully implemented the proposed action.The number of such applications in conservation could become numerous, potentially having big positive impacts over large areas.As an example, if the default was set so that the most easily available longline fishery equipment sold would be design models that reduce seabird by-catch, and if these would be sold along with a best practise guides on how to reduce seabird by-catch, seabird mortality could be measurably reduced.Nevertheless, nudges have rarely been considered in nature conservation.Nudging is only one among several possible alternative models for changing behaviour in order to enhance the take-up of conservation actions via voluntary non-monetary approaches.It is well known that one of the most powerful determinants of behaviour is what is allowed by the physical and social environment.That is, information on what actions can and cannot be implemented is the first step that needs to be considered, and the one that we attempted to address in this study.Ultimately, an interdisciplinary approach could be of utmost importance for the successful implementation of conservation actions on private land using a voluntary non-monetary approach.While private landowners and other citizens will be key actors for implementing conservation, outreach and advocacy programmes implemented by NGOs and other organizations can further increase the take-up of actions on private land.Conservation scientists could be responsible for gathering the necessary data and evaluating the effectiveness of actions, and the results would then be fed back to private landowners via NGOs and other organizations.Although challenging to achieve, such a collaborative effort could have important large-scale benefits for conservation."A successful example of such positive interdisciplinary collaboration between conservation citizens, conservation scientists and NGOs is provided by the French national programme for protecting Montagu's harrier nests in farmlands of France.There, different nest protection interventions are implemented using a voluntary non-monetary approach each year throughout France by volunteer conservationists, nationally coordinated by the LPO.This effort was coupled in more recent years with survey data on breeding parameters of 
protected nests.The resulting major survey data were used by scientists to evaluate the effectiveness of interventions and their overall impact on the harrier populations.Such scientific feedback is currently being returned by the LPO to the network of volunteer conservationists, which hopefully leads to increased participation in implementation of the most effective actions.Knowledge about the success of this scheme has in turn motivated discussion in neighbouring Spain about how to improve volunteer participation in nest protection there.Another successful example of positive interdisciplinary collaboration between conservation citizens, conservation scientists and local organizations is represented by a voluntary conservation programme for protecting nests of forest hawks under threat from logging in private forests of North Karelia, Finland.There, 97% of private forest owners accepted to voluntarily participate, without any financial incentives, in the programme when asked by a representative of a regional forest management organization.The programme resulted in a very large decrease in nests being lost to logging.Moreover, the small forest buffer retained around the nests was found effective in maintaining nest occupancy by the raptors.Volunteers have also been successfully engaged to restore coastal meadow habitat on islands in Estonia.Actions such as reed and scrub removal, mowing and implementation of grazing, pond restoration and educational activities were implemented there by 200 volunteers.As a result of these efforts, numbers of the natterjack toad increased on one of the islands, and its decline was halted in other two islands, suggesting that the programme was also biologically effective.Similarly, a project was initiated by scientists with the aim to eradicate a harmful invasive species, the American mink in Scotland.A large number of different local stakeholders, when asked, joined the project on a voluntary basis whereby no financial incentives were provided.The created coalition of volunteers, trained to detect and trap mink, successfully eradicated the invasive species from large areas under the scope of the programme.We contend that a voluntary non-monetary approach may represent a missed opportunity that the conservation community, including researchers and conservation managers, should both address.We show that there are examples where this approach has been successful.We show that while diverse actions are potentially suitable for implementation through a voluntary non-monetary approach, the approach and its scale and ecological impact have been mostly neglected by conservation scientists.The work of conservation scientists is needed for evaluating conservation interventions and their societal acceptability, and for providing lists and descriptions of actions that are feasibly applicable in different environments.Ultimately, it is our hope that this study will represent a clarion call for conservation scientists to clearly recognize the value of voluntary non-monetary approaches, their characteristics, and their potential for facilitating conservation on private land.
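As an illustration of the screening logic applied to the farmland actions earlier in this article (feasibility for implementation by an untrained landowner, estimated cost in work-day equivalents per hectare per year, and existence of evidence of effectiveness), the sketch below filters a handful of invented example records against the two work-day cost threshold assumed in the text. The records are placeholders, not the published assessments of the 119 actions.

```python
# Minimal sketch: screen candidate conservation actions with the three criteria
# described above. The records are invented placeholders, not the real scores.

ACTIONS = [
    # (action, feasible_for_untrained_landowner, cost_workdays_per_ha_per_yr, has_evidence)
    ("Provide nest boxes for birds",     True,  0.5, True),
    ("Raise mowing height on grassland", True,  1.0, True),
    ("Leave overwinter stubbles",        True,  1.5, False),
    ("Create beetle banks",              True,  4.0, True),
    ("Restore species-rich grassland",   False, 9.0, True),
]

COST_THRESHOLD = 2.0  # work-day equivalents per hectare per year, as assumed in the text

def suitable(record):
    name, feasible, cost, evidence = record
    return feasible and cost <= COST_THRESHOLD

shortlist = [r for r in ACTIONS if suitable(r)]
print(f"{len(shortlist)} of {len(ACTIONS)} example actions pass the feasibility and cost screen:")
for name, _, cost, evidence in shortlist:
    tag = "evidence of effectiveness" if evidence else "effectiveness untested"
    print(f"  - {name} ({cost} work-days/ha/yr; {tag})")
```

Actions that pass the first two criteria but lack evidence can still be tried out, with their effectiveness assessed to accumulate evidence, as noted above.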
The voluntary non-monetary approach to conservation refers to actions that citizens or organizations could voluntarily implement in their area of influence without the incentive of monetary compensations. To be effectively implemented by untrained actors, actions should be clearly defined, straightforward to implement and not require specific scientific knowledge. The costs of actions should also be sufficiently affordable to be widely applied without monetary incentives. A voluntary non-monetary approach has so far not been clearly described as a distinct group of tools for nature conservation. Here we review the scarce scientific literature on the topic. To illustrate the applicability of a voluntary non-monetary approach to conservation, we then investigate its potential for farmland conservation. We considered a list of 119 actions available from "conservation-evidence", a source of systematically collected evidence on effectiveness of conservation actions. Among 119 actions, 95 could be scored for feasibility of implementation, costs, and existence of evidence in UK, Spain and Finland. Sixteen to seventeen actions were potentially suitable for implementation by a voluntary non-monetary approach. This implies that the voluntary non-monetary approach could be widely applicable across many countries and environments. It is our hope that this study will represent a clarion call for conservation scientists to clearly recognize the voluntary non-monetary approach, its characteristics, and its potential for addressing conservation issues on private land. Adoption of such voluntary measures may be more dependent on encouragement ('nudging') than on the usual coercive or financial emphasis ('shoving').
Np(V) sorption and solubility in high pH calcite systems
Government policy in many countries is that disposal of higher activity radioactive wastes, including both intermediate and high level wastes, will occur in underground Geological Disposal Facilities at a depth of 200 to 1000 m.In the UK, a significant fraction of the Intermediate Level Waste has already been grouted and under the current generic model for ILW disposal, the GDF may be backfilled with a cement based material.Upon resaturation of the sub-surface, the interaction of the cementitious materials with groundwater will create a region of high pH known as the Chemically Disturbed Zone.This CDZ is expected to remain alkaline over extended periods.Initially, K/Na hydroxide dissolution will generate a ~pH 13 leachate which will gradually reduce to ~pH 12.5 due to equilibration with portlandite2).Finally, the pH of the leachate will fall to ~10.5 as the system shifts to being buffered by calcium-silicate-hydrate gel.The high pH is intended to produce conditions that limit radionuclide mobility.Therefore, investigation of radionuclide behaviour in the alkaline conditions generated by cementitious materials is of wide relevance.Calcite has been shown to sequester actinides effectively including UO22+, PuO22+, NpO2+, Th4+ and Am3+ through adsorption/incorporation/precipitation reactions.Calcite is a common mineral in many host geologies considered for geological disposal and is likely to increase in concentration in the engineered subsurface as the GDF environment evolves due to cement carbonation reactions.Indeed, calcite formation is expected to be one of the main controls on carbonate concentration in a cementitious GDF as, over time, the portlandite component will be converted into calcite.Therefore, calcite will be an important reactive mineral phase for radionuclides within the GDF environment.237Np is a transuranic isotope and is one of the most radiologically significant elements in the disposal of radioactive waste because of its long half-life of 2.13 × 106 years.Neptunium can exist in a range of oxidation states, and in ambient oxidising environments Np will dominate as the dioxygenyl, NpO2+ species, one of the most mobile of the actinides.Because of the elevated solubility of Np, it is worthy of study as an end member for retardation in a GDF and provides insight into An behaviour in environmental systems.Furthermore, oxidising conditions are typically expected to persist after the closure of the cementitious GDF for several hundred years.Ultimately, it is expected that reducing conditions will develop although it is acknowledged that wastes are heterogeneous and, in some cases have oxidising components.Furthermore, some GDF designs consider the potential for reoxidation of the GDF due to the influx of oxygenated glacial groundwaters, which may occur over long timescales.Therefore, it is important to understand Np) behaviour.A small number of studies have examined Np solubility over a range of environmental conditions, such as pH, concentration, solution composition and ionic strength.In carbonate free environments at pH 10–12, NpO2OH and Np2O5 are expected to precipitate.Over time Np solubility is expected to fall as the initial NpO2OH solid phase undergoes a slow transition to a less soluble NpO2OH phase over a period of several months.In addition,NpO2CO3·xH2O and3NpO22 phases have been shown to precipitate in systems containing high carbonate concentrations.Recent work has used X-ray diffraction, X-ray Absorption Near-Edge Structure spectroscopy, and near infrared to probe 
Np solubility in a range of pH neutral to alkaline CaCl2 systems.This demonstrated that in undersaturated systems NpO2 underwent transformation into Np-Ca-OH phases such as CaNpO22.6Cl0.4·2H2O as a function of pH and CaCl2 concentration.Subsequent experiments where Np was added to the model system in an acidified aliquot at higher concentrations suggested that a Ca0.5NpO22·1.3H2O solid may be controlling solubility.Other work has focused on the solubility of Np in high pH, high Ca, and highly oxidising environments.The work demonstrated that under these extreme conditions Np can form stable calcium neptunate phasesO3+x) with Ca:Np ratios of between 0.6 and 1.6 and with an observed solubility of approximately 10−6 to 10−6.5 M.In work examining Np interactions with calcite, the speciation of Np on coprecipitation with calcite at pH 7.8–12.8 was studied using Extended X-ray Absorption Fine Structure.The authors fitted their EXAFS data using four shells: 2.1 oxygen atoms at 1.86 Å to account for the axial oxygen in the neptunyl unit; 3.9 oxygen atoms at 2.40 Å to represent the closest oxygen in the coordinated carbonate; and, 4.9 carbon and 2.1 oxygen atoms at 3.10 and 3.40 Å, respectively, to account for the remaining atoms in the carbonate ligand.The Np-Oeq distance was longer than the Ca-Oeq distance in calcite and this was interpreted as Np coordination by four monodentate carbonate ions.Furthermore, the authors suggested that low Debye-Waller factors for the axial and equatorial oxygens indicated low structural disorder and that NpO2+ was incorporated into the calcite structure.Further work proposed that NpO2+ substitutes for Ca2+ in the calcite structure, where the axial neptunyl oxygens substitute for two adjacent CO32− ions.NpO2+ sorption to calcite was also studied by the same group and was found to reach a maximum at pH 8.3 and at low Np concentrations.The pH dependence of sorption was much stronger at higher Np concentrations.Kinetic experiments at pH 8.3 showed slow adsorption with experiments not at steady state after four weeks of reaction.This was attributed to the structural incorporation of Np in to the calcite as the solid underwent gradual dissolution/recrystallization.There are several studies that examine Np solubility in the literature.However, only a few of these have focused on the high pH systems relevant to intermediate level waste disposal and only one was concerned with the influence of Ca2+ on Np solubility.Additionally, there are only a few studies examining the interaction of Np with calcite surfaces, none of which examine Np-calcite interactions in high pH environments.Here, we studied Np solubility in systems designed to be directly relevant to a cementitious disposal facility.We describe experiments carried out in ‘young’ and ‘old’ synthetic cement leachates as well as experiments to investigate further the role of Ca2+ in Np solubility under high pH conditions.Throughout we have used a combination of batch experiments, geochemical modelling, and X-ray Absorption Spectroscopy to define Np speciation, solubility and sorption behaviour.Overall these data provide new insights into the reactivity of Np with calcite under the high pH conditions expected in and around a cementitious GDF.237Np is a highly radiotoxic α-emitter with β and γ-emitting daughter isotopes.The possession and use of radioactive materials is subject to statutory control.A high purity chemically precipitated calcite powder was used throughout and was characterised using powder X-ray 
diffraction.Prior to use, the calcite was passed through a <63 μm sieve, and the surface area of this sieved fraction was measured as 0.26 ± 0.01 m2 g−1 using B.E.T analysis.Additionally, past work on the material using SEM showed crystals with the rhombohedral morphology typical of calcite and described a U-rich coating of 10–100 nm on the surface of the U reacted calcite using FIB TEM analysis.For all experiments, 237Np was prepared chemically and its oxidation state confirmed UV–vis spectrometry immediately prior to dilution into a 0.01 M HCl sub-stock.In order to reflect the chemical conditions expected during the re-saturation of cement, two solutions were used: Old Cement Leachate and Young Cement Leachate.The OCL solution consisted of a pH 10.5 Ca2 solution, whereas YCL was pH 13.3 system containing: 5.2 g L−1 KOH; 3.8 g L−1 NaOH; and 0.01 g L−1 Ca2.Leachate pH was measured with a pH probe routinely calibrated with pH 7, 9 and 11 calibration buffers and checked against a pH 13 buffer solution.Selected experiments required YCL and OCL solutions which had been pre-equilibrated with calcite.Here, calcite was reacted with the solutions for 4 days before filtration and use in experiments.All sample manipulation was conducted under a CO2 free atmosphere to ensure that carbonate concentrations remained static throughout the experiment.This was required as the slow kinetics of CO2 dissolution coupled with a greatly increased capacity for CO2 dissolution at high pH would otherwise lead to an uncontrolled increase in carbonate concentrations throughout the experiment.Sorption experiments were spiked to a final Np concentration between 1.62 × 10−3 μM and 1.62 μM and included either 100 g L−1, 20 g L−1, or 2 g L−1 calcite in an experimental volume of 50 mL.In the Np solubility experiments, the Np concentration was either 4.22 μM or 42.2 μM in a volume of 10 mL.In experiments targeting the effect of Ca2+ on solubility in the YCL system, the calcium concentration was varied over a range of concentrations by addition of CaCl2.All reactions were carried out triplicate in polypropylene centrifuge tubes and allowed to equilibrate for 48 h before the introduction of the Np spike.For all Np analyses, the supernatant was separated from the experiments by centrifugation at 5000g for 5 min.In low concentration Np experiments, total 237Np concentrations were determined using Inductively Coupled Plasma Mass Spectrometry on an Agilent 7500cx instrument.The ICP-MS samples were prepared by dilution in 2% HNO3 solution.All ICP-MS measurements were calibrated and corrected for instrumental drift with a 10 μg L−1 232Th internal standard.For higher activity experiments, Np activity was determined using Liquid Scintillation Counting in a Quantulus instrument.LSC samples were prepared using 1 mL of sample with scintillant with appropriate matrix matched standards.The precipitate from the 42.2 μM Np-YCL solubility experiment was amenable to XAS analysis at the Np LIII edge and was recovered for this purpose.The sample was isolated by centrifugation, and the resultant pellet was mounted in a CO2 free environment in a triple contained sample holder and stored at −80 °C prior to analysis.EXAFS spectra were collected at room temperature at the Np LIII-edge on beamline B18 at Diamond Light Source using a solid-state Ge detector, focusing optics, and a water-cooled Si-111 double crystal monochromator.The data were calibrated in energy space using an in-line Y foil reference standard.Background processing was performed 
using Athena, and EXAFS modelling was carried out using Artemis in conjunction with FEFF 8.5L. Throughout, model parameterisation used no more than two thirds of the total number of available independent data points. In all calculations the three multiple scattering paths associated with the axial oxygen atoms in the neptunyl unit were included. In the fitting of the Np–Oaxial multiple scattering paths, the parameters for distance and the Debye-Waller factor were fixed to double the values for the Np–Oaxial single scattering path, except for the rattle multiple scattering path, where the Debye-Waller factor was fixed to quadruple the value of the single scattering path. All speciation and saturation thermodynamic calculations were performed using the United States Geological Survey thermodynamic speciation code PHREEQC Interactive with the ANDRA SIT database. PHREEQC input files are available in the SI. A series of OCL and YCL 42.2 μM NpO2+ solutions were allowed to equilibrate with calcite over 550 h under CO2 free conditions. Thermodynamic calculations using PHREEQC predicted that at 42.0 μM Np, these systems would be supersaturated with respect to several Np solid phases, specifically Np2O5 and both fresh and aged NpO2OH(am). Subsequent solution analysis of the centrifuged supernatant from these experiments, post-reaction, indicated that removal of Np from solution had taken place. Here, the experimental concentrations of NpO2+ in solution at apparent equilibrium were determined to be 9.7 ± 0.6 μM and 84 ± 4 nM for the OCL and YCL systems, respectively. A comparison of these post-reaction data with the predicted solubility of key Np solid phases suggested that NpO2OH(am) was the controlling phase in the OCL systems. By contrast, the post-reaction Np concentrations measured in calcite equilibrated YCL were significantly lower than would be expected if NpO2OH(am) solid phases were controlling solubility. Furthermore, the observed Np solubility in the YCL system post reaction was significantly higher than expected if the solution were in equilibrium with Np2O5, and it was unlikely that a crystalline solid would form in the timeframe of this experiment. This suggested that a Np phase which was not in the thermodynamic database was controlling Np solubility. Under similar hyperalkaline conditions, calcium has a controlling influence on U solubility, with U precipitating in calcium containing phases such as becquerelite (Ca(UO2)6O4(OH)6·8(H2O)) and calcium uranate. Given this, we also explored Np solubility as a function of calcium concentration in a targeted set of experiments. Here, a series of carbonate free, pH 13.3 NaOH solutions were created with a range of Ca2+ concentrations, which were predicted to be undersaturated with respect to portlandite, and either 4.22 or 42.2 μM NpO2+. After 24 h of reaction, analysis of the supernatant indicated that NpO2+ removal had occurred across all systems. The Np concentrations after 24 h in the 1.4 × 10−3 M Ca2+ solutions were 0.18 ± 0.02 and 1.09 ± 0.01 μM for the 4.22 and 42.2 μM Np experiments, respectively. This was in contrast to calcium free experiments, in which Np concentrations were 3.49 ± 0.28 μM and 5.60 ± 0.24 μM for the 4.22 and 42.2 μM Np experiments, respectively. The clear relationship between increased calcium concentration and reduced Np solubility suggested that a calcium containing Np solid was controlling solubility in these systems. The Ca2+:Np ratio of this solid can therefore be estimated from the gradient of a log-log plot of the dissolved Np concentration versus the dissolved Ca2+ concentration. Analysis of Ca2+ in the post reaction solutions was not possible due to radiological
considerations.In the absence of experimentally derived Ca2+ concentrations, an iterative modelling procedure was used to correct for Ca2+ removal in the precipitate.An initial gradient was obtained from the data using the total Ca2+ concentration and this was used to calculate the amount of Ca2+ removed from the solution.A new gradient was then calculated which took the Ca2+ removal into account and this was then used to recalculate the Ca2+ concentration.This procedure was repeated until the Ca2+ values used in the plot were consistent with the gradient.For most data points, the Ca2+ was present in higher concentrations than the Np, and so there was relatively little difference between the initial and final plots.The final gradients were calculated as −0.40 ± 0.03 and −0.24 ± 0.03 for the 4.22 μM and 42.2 μM data, respectively.These values suggest that there was an average of 0.40 ± 0.03 and 0.24 ± 0.03 calcium atoms per neptunium atom in the bulk precipitate for the 4.22 and 42.2 μM Np systems, respectively.There are two possible explanations for this statistically significant decrease in the Ca2+:Np ratios as the Np concentration increased.The first is that Np phases may have precipitated.For example, NpO2 may also be precipitating in addition to any calcium‑neptunium phase.Alternatively, solid solutions of Np and Ca may be forming which would allow the Ca2+:Np ratio to change as a function of Np concentration.Fallhauer et al. studied the transformation of NpO2 in concentrated brine systems and determined three previously unknown Ca containing Np phases: CaNpO22.6Cl0.4·2H2O; Ca0.5NpO22.1.3; and Ca0.5NpO22.The Ca:Np ratios of 0.5–1:1 in their reported phases are high compared to those calculated for our experiments.This is presumably since geochemical conditions studied in Fallhauer et al. were dominated by higher Ca2+ concentrations, lower pH values, and much higher Np concentrations compared to the current work.Despite this, the report of these new Ca-Np phases certainly adds support to the postulated controlling nature of Ca on Np solubility in the current study.A sample of the Np containing solid which precipitated from the 42.0 μM Np YCL solubility experiment was collected and analysed using XAS on B18 of the Diamond Light Source to provide further insight into the Np speciation.Comparison of the XANES spectra with Np and Np standards clearly confirmed that the sample was dominated by Np.Further comparison of the XANES spectrum for the sample and Np standards in both dioxygenyl and neptunate coordination suggested the sample was dominated by a dioxygenyl coordination environment.This was consistent with the EXAFS modelling which successfully fitted two axial oxygens at 1.89 Å.This distance is consistent with the dioxygenyl bond lengths for Np species of ~1.9 Å, and in contrast to a neptunate coordination environment where the axial oxygen distances lengthen to ~2.1 Å.The EXAFS fitting was further informed by relevant literature related to high pH Np precipitates.Reflecting this, an additional oxygen shell was fitted to the EXAFS data.A successful fit was achieved with one shell of equatorial oxygen containing 4 atoms at 2.41 Å.The lack of evidence for any additional Np backscatterers at longer distances in the fit is consistent with the formation of a poorly ordered a Np-hydroxide phase, similar to that of Bots et al. 
which was observed under similar experimental conditions.The sorption of Np to calcite surfaces was investigated over a range of Np concentrations and calcite concentrations in both OCL and YCL solutions using batch sorption experiments.PHREEQC was used to predict Np speciation in OCL and YCL solutions with CO32− concentrations controlled by equilibrium with calcite.These calculations suggested that in the OCL systems there was sufficient calcite dissolution to allow the NpO2− species to dominate speciation with free NpO2+ and neutral NpO2 accounting for the remainder.In contrast, in the calcite equilibrated YCL system the predicted speciation was dominated by a single hydroxide species2−; 88.5%).In this system, the free NpO2+ ion was predicted to account for only 0.09% of Np aqueous speciation.It is important to note that for Np speciation at high pH, where data are sparse, thermodynamic databases are likely to be incomplete and calculations provide insights into the likely speciation rather than defining it absolutely.Overall, the predicted speciation in the calcite equilibrated OCL suggests a significant contribution of neutral and cationic aqueous species; NpO2+) and thus outer sphere complexation seems possible.By contrast, in YCL systems the proportion of NpO2+ is predicted to be much lower.Given that calcite surfaces carry a significant negative charge at pH 13, this suggests outer sphere complexation is unlikely.Despite this extreme pH, the unfavourable electrostatic environment may be overcome by strong chemical interactions and therefore the formation of inner sphere complexes remains possible.In all sorption experiments Np was removed from solution on reaction with calcite.In the YCL systems, removal was observed at all Np sand calcite concentrations.In YCL systems with an initial Np concentration of 1.62 × 10−3 μM, Np solution concentrations at apparent equilibrium were: 42 ± 1.6%; 66 ± 0.7%; and 84 ± 0.8% of the initial spike with solid to solution ratios of 100, 20, and 2 g L−1, respectively.Whereas, with an initial Np concentration of 0.16 μM the final Np concentrations in solution were: 6 ± 1.0%; 14 ± 1.7%; and 40 ± 2.9%.Interestingly, the Np concentrations showed a clear dependence on the solid to solution ratio in both systems, which suggested that the calcite surface must be involved in Np removal from solution.However, the relative proportion of Np removed from solution increased in the 0.16 μM system compared to the 1.62 × 10−3 μM system.For example, with a solid to solution ratio of 100 g L−1, 58% and 94% of all Np was removed from solution in the 1.62 × 10−3 μM and 0.16 μM systems, respectively.The amount of Np sequestered in a system via surface complexation will be determined by the product of the number of available binding sites and the equilibrium constant.Consequently, in a system where Np removal is controlled by surface complexation, sorbed Np as a proportion of total Np cannot increase between two systems with an increasing Np concentration.Therefore, the experimental data from the YCL calcite sorption experiments cannot be explained by formation of a Np-calcite surface complex alone and implies that an additional process, such as precipitation, is also occurring.Indeed, precipitation was observed in control experiments with calcite equilibrated YCL solution where the Np concentration in solution decreased by 60 ± 3.0% and 84 ± 1.6% with initial Np concentrations of 0.16 and 1.62 μM, respectively confirming oversaturation.The OCL systems also showed 
significant removal of Np from solution.At the lowest Np concentration, sorption to calcite was measured as 94 ± 1.2%, 87 ± 1.0% and 50 ± 3.0% of the original concentration.Np removal increased with solid:solution ratio and thus surface area.However, in contrast to the YCL systems, there was no increase in the proportion of Np removal with increasing Np concentrations.Overall, the OCL data showed increased sorption with an increased calcite surface area, with no additional removal processes such as precipitation, which is consistent with the formation of a calcite-Np surface complex dominating these experiments.Sorption isotherms for both YCL and OCL systems at apparent equilibrium are shown in Fig. 4.Both isotherms show a clear linear relationship with gradients of 1.69 ± 0.20 and 0.90 ± 0.12, respectively.In the case of surface complexation, the gradient is indicative of the number of Np atoms per surface complex–).Therefore, in the OCL systems the data are consistent with one Np atom per surface complex.The formation of a Np monodentate inner-sphere complex is consistent with the EXAFS data presented by Heberling et al. showing surface complexation in Np-calcite systems at pH 6.0–9.4.The same treatment of the YCL data would suggest 1.69 ± 0.20 Np atoms per surface complex.However, this is unphysical as for a surface complexation 1 or fewer metal atoms per binding site are expected.In the YCL system removal still partially depends on the mass of calcite.This suggests that the surface is involved in controlling Np removal.Indeed, the surface could be involved by providing nucleation centres for Np-solid growth, by formation of a discrete Np phase bonded to the surface, or a combination of both surface complexation and precipitation.Surface precipitation would also be consistent with the observed increase in sorption with increasing Np concentration in the batch experiments.In fact, surface mediated precipitation reactions have been reported for calcite surfaces with U and similar reactions could be relevant to Np.The kinetics of Np sorption to the calcite surface were also examined.In both the YCL and OCL systems, several weeks were required for apparent equilibrium to be reached.Overall, there was an initial rapid increase in the amount of Np sorbed over the first 24 h, followed by a slower removal from solution.The kinetic data from the OCL were then modelled with both first and second order rate equations.Fitting the data assuming a first order reaction provided a good fit to most systems, with calculated first order rate constants of between 4 × 10−4 and 6 × 10−3 h−1.However, some experiments were not well fitted assuming first order kinetics.Therefore, the data were also fitted assuming a second order reaction.This approach proved more successful in the 100 g L−1 systems and generally improved the quality of all fits with rate constants of between 6 × 10−4 to 1 × 10−2 M−1 h−1.While a surface complexation reaction would exhibit first order kinetics, other processes such as colloid aggregation and/or surface precipitation may be second order.It was however impossible to determine conclusively from the data if Np removal was controlled by a first or second order relationship.Slow sorption kinetics, as seen in these experiments, are unusual for surface complexation to mineral surfaces for labile actinide ions such as NpO2+, and removal is typically expected to occur within 24–48 h.Work by Heberling et al. 
observed slow Np removal in their circumneutral pH calcite experiments which they attributed to recrystallization of the calcite and incorporation of Np.Another plausible explanation for the slow kinetics could be the formation of a Np-colloid.Work by Smith et al., 2015 and Bots et al., 2014 has demonstrated that intrinsic U-colloids can form in cement leachate systems parallel to those used in this study.In the current work, no intrinsic Np-colloid was identified due to the challenges of experimental work with this radiotoxic, transuranic element.Despite this, the parallel with U behaviour under identical conditions certainly suggest Np colloid formation might be possible, and thus either recrystallisation and/or Np colloid formation could account for the slow kinetics.Np solubility in calcite equilibrated YCL and OCL solutions and in the presence of variable calcite concentrations has been investigated using a combination of solubility experiments, thermodynamic calculations and XAS analyses.Batch experiments indicated that after 600 h, the concentrations of Np were 9.68 μM and 0.084 μM for the calcite equilibrated OCL and YCL systems, respectively.For OCL experiments, data were consistent with thermodynamic equilibrium with a NpO2 solid phase.By contrast, the observed solubility in the calcite equilibrated YCL system was much lower than predicted if NpO2 were the limiting phase.In carbonate free, pH 13.3 NaOH systems, there was a linear relationship between the observed Np solubility and calcium concentration suggesting an unidentified Np/Ca phase was an important solubility control and similar to recently reported Np-Ca-OH solid phases.These experiments indicated that the precipitate had an average of 0.40 ± 0.03 and 0.24 ± 0.03 Ca2+ atoms per Np when the initial Np concentration was 4.22 or 42.2 μM Np, respectively.XAS data on a sample of the precipitate isolated from the 42.2 μM Np YCL solubility experiment confirmed a dioxygenyl type coordination environment dominated in the precipitate, similar to previous studies.The current data are consistent with the formation of a calcium containing neptunyl solid as the solubility limiting phase.Np sorption to calcite in YCL and OCL solutions was studied using batch experiments.Significant removal from solution was observed across all Np concentrations and solid to solution ratios in both the YCL and OCL systems.In both systems, Np removal increased with calcite surface area, suggesting that the surface played a role in sorption.In the YCL systems the absorption isotherm was inconsistent with surface complexation alone but could be explained if surface induced precipitation played a role.In the OCL systems, the absorption isotherm was consistent with formation of a surface complex.The kinetics of Np removal in both OCL and YCL systems were slow, taking several weeks to reach equilibrium and with the data consistent with either first or second order reactions.The cause of these slow kinetics could be due to the slow recrystallisation of the calcite surface and incorporation of Np.In addition, the unconfirmed presence of Np colloids, similar to those observed for U in parallel conditions to the current work, could offer an additional explanation for the slow reaction with the surface.Critically, our data suggest that in hyperalkaline cementitious systems, Np solubility is controlled by a previously unidentified Ca2+-Np phase.This suggests that under high pH conditions, Np solubility will be lower than predicted by the current thermodynamic 
databases and highlights the importance of additional Np solid phase data in these systems.In a young GDF, dominated by extreme pH leachates > pH 13, when in contact with calcite solids, Np removal from solution seems likely to be controlled either by a combination of precipitation/surface complexation or by surface mediated precipitation.However, in an aged geological disposal facility where pH is moderated to less extreme conditions, the primary Np removal mechanism to calcite surfaces is likely to be via surface complexation.
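The iterative gradient correction described above, where the log-log slope is first computed against total Ca2+ and then recomputed after subtracting the Ca2+ estimated to have been lost to the precipitate until the value is self-consistent, reduces to a short calculation. The sketch below is a minimal Python illustration under stated assumptions: the concentration values are hypothetical, the function name is ours, and a simple least-squares line in log-log space stands in for whatever fitting routine the authors used.

import numpy as np

def ca_np_ratio(np_aq, ca_total, np_removed, max_iter=50, tol=1e-4):
    # Iteratively estimate the Ca:Np ratio of the precipitate from the gradient
    # of log10(dissolved Np) vs log10(dissolved Ca2+); all inputs in mol per litre.
    ratio = 0.0                                   # first pass uses total Ca2+
    for _ in range(max_iter):
        ca_aq = ca_total - ratio * np_removed     # correct Ca2+ for coprecipitation
        slope = np.polyfit(np.log10(ca_aq), np.log10(np_aq), 1)[0]
        new_ratio = -slope                        # minus the gradient = Ca atoms per Np
        if abs(new_ratio - ratio) < tol:
            return new_ratio
        ratio = new_ratio
    return ratio

# Hypothetical data: dissolved Np falls as the Ca2+ addition increases
ca_total   = np.array([1.0e-4, 3.0e-4, 7.0e-4, 1.4e-3])   # mol/L added
np_aq      = np.array([2.5e-6, 1.9e-6, 1.4e-6, 1.1e-6])   # mol/L measured at 24 h
np_removed = 4.22e-6 - np_aq                               # mol/L lost to the solid
print(round(ca_np_ratio(np_aq, ca_total, np_removed), 2))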
Np(V) behaviour in alkaline, calcite containing systems was studied over a range of neptunium concentrations (1.62 × 10−3 μM–1.62 μM) in two synthetic, high pH, cement leachates under a CO2 controlled atmosphere. The cement leachates were representative of conditions expected in an older (pH 10.5, Ca2+) and younger (pH 13.3, Na+, K+, Ca2+) cementitious geological disposal facility. These systems were studied using a combination of batch sorption and solubility experiments, X-ray absorption spectroscopy, and geochemical modelling to describe Np behaviour. Np(V) solubility in calcite equilibrated old and young cement leachates (OCL and YCL) was 9.7 and 0.084 μM, respectively. In the OCL system, this was consistent with a Np(V)O2OH(am) phase controlling solubility. However, this phase did not explain the very low Np(V) solubility observed in the YCL system. This inconsistency was explored further with a range of pH 13.3 solubility experiments with and variable Ca2+(aq) concentrations. These experiments showed that at pH 13.3, Np(V) solubility decreased with increasing Ca2+ concentration confirming that Ca2+ was a critical control on Np solubility in the YCL systems. X-ray absorption near-edge structure spectroscopy on the precipitate from the 42.2 μM Np(V) experiment confirmed that a Np(V) dioxygenyl species was dominant. This was supported by both geochemical and extended X-ray absorption fine structure data, which suggested a calcium containing Np(V) hydroxide phase was controlling solubility. In YCL systems, sorption of Np(V) to calcite was observed across a range of Np concentrations and solid to solution ratios. A combination of both surface complexation and/or precipitation was likely responsible for the observed Np(V) reaction with calcite in these systems. In the OCL sorption experiments, Np(V) sorption to calcite across a range of Np concentrations was dependent on the solid to solution ratio which is consistent with the formation of a mono-nuclear surface complex. All systems demonstrated slow sorption kinetics, with reaction times of weeks needed to reach apparent equilibrium. This could be explained by slow recrystallisation of the calcite surface and/or the presence of Np(V) colloidal species. Overall, these data provide valuable new insights into Np(V) and actinide(V) behaviour in alkaline conditions of relevance to the disposal of intermediate level radioactive wastes.
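For the slow sorption kinetics discussed above, where Np removal was fitted with both first- and second-order rate equations, an analogous comparison can be sketched with simple integrated rate laws. The time points and concentrations below are hypothetical, and the assumed forms (pure first-order and pure second-order decay of the dissolved concentration, with no equilibrium offset) are our reading rather than the authors' exact rate expressions.

import numpy as np

# Hypothetical time series: dissolved Np (M) during reaction with calcite
t = np.array([0.0, 24, 72, 168, 336, 600])       # hours
c = np.array([1.6e-7, 1.2e-7, 9.0e-8, 6.0e-8, 4.0e-8, 2.8e-8])

# First order: ln C = ln C0 - k1*t, so k1 is minus the slope of ln C vs t
k1 = -np.polyfit(t, np.log(c), 1)[0]

# Second order: 1/C = 1/C0 + k2*t, so k2 is the slope of 1/C vs t
k2 = np.polyfit(t, 1.0 / c, 1)[0]

print(f"first-order  k1 = {k1:.1e} per hour")
print(f"second-order k2 = {k2:.1e} per molar per hour")

Comparing residuals between the two fits is one way to judge which order better describes a given dataset, mirroring the comparison reported above.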
451
Do thin, overweight and obese children have poorer development than their healthy-weight peers at the start of school? Findings from a South Australian data linkage study
The transition into primary school is considered to be an important period in the life course."A child's ability to fully benefit from, and participate in, school life is dependent upon their physical, cognitive, and socio-emotional development.Every child has the right to be physically healthy, including being free from illness and possessing the fine and gross motor skills to allow them to engage in classroom activities.Other essential foundations for learning include cognitive abilities and language skills.Socio-emotional behaviors including emotional regulation, attention, social relationships, and awareness, as well as attitudes are supportive of learning.These aspects of child development have been linked to later school achievement and subsequently to health, well-being, and social circumstances in adulthood.There is recognition of the potential for early child development to improve health and well-being, and supporting early child development is a priority of governments around the globe."This has prompted the design of schemes such as the Australian Early Development Census, which involves monitoring aspects of early child development that are relevant for understanding children's preparedness to learn at school and is indicative of later school performance.All aspects of child development, including cognition, socio-emotional well-being, and motor skills, are dependent upon physical and nutritional well-being.The interdependence between different aspects of health and well-being is increasingly acknowledged by researchers and policy makers alike.Yet in developed countries, little is known about the general development of children who are not of a healthy body mass index.This is despite dramatic increases in rates of overweight and obesity; and a wealth of evidence from low- to middle-income countries on the detrimental impacts of impeded growth throughout infancy and childhood.In recent decades, childhood overweight and obesity have increased dramatically in Australia and in other countries, with some signs of levelling off.Overweight and obesity have been associated with poorer outcomes in later childhood, including reduced self-esteem and psychosocial well-being, and the development of cardiovascular risk factors and metabolic disorders.In adulthood, overweight and obesity have been linked to a range of negative outcomes, including cancer, cardiovascular disease, and reduced healthy life expectancy.There is a paucity of research examining the association between overweight, obesity, and development in young children, and of the studies that are available, findings have been mixed.There is some evidence that obese children have poorer socio-emotional well-being and behavior, cognition and language, and academic scores.Obese children have also been shown to be at increased risk of asthma or wheezing, poor scores on global measures of health, lower daily activity skills, and fine and gross motor abilities.In many cases, these differences are small, and several studies show inconsistencies across outcomes, genders, or age groups, or that the relationships are confounded by socio-economic circumstances.It has been postulated that null findings may be due to some studies examining overweight and obese children as one group.It is possible that any effect on child development may be more evident as the extent to which a child is overweight increases, and consequently, there is a need to examine overweight and obesity separately.Recently, age- and gender-adjusted BMI cut-offs for 
thinness were created by Cole, Flegal, Nicholls, and Jackson, to complement the International Obesity Taskforce cut-offs for childhood overweight and obesity."In high-income countries, much less attention has been paid to the determinants and consequences of childhood thinness than overweight and obesity, even though there is evidence that thinness remains a public health issue.The majority of evidence refers to the impact of more chronic measures of impeded growth in early childhood on development.Nevertheless, it is thought that moderate or mild degrees of thinness can impede development, including language, intelligence, attention, reasoning, and visuospatial functioning.There is a dearth of research examining the association between thinness and child development in high-income countries, particularly in preschool children, and using measures of development that capture the preparedness of children to fully benefit from and participate in school life.The limited evidence base indicates that thinness is associated with worse academic scores, poorer global health, higher special health care needs, and possibly higher rates of infection and conditions which limit daily functioning.On the other hand, studies have found that children who are thin are no different from healthy-weight children in terms of their behavior and socio-emotional well-being, susceptibility to respiratory infections, number of visits to general practitioners, school absenteeism due to illness, and motor skills.Indeed, one study found that thin children had a reduced risk of asthma, and another that thin children were less likely to display behavioral problems, when compared with healthy-weight children.However, various definitions of thinness were used in these studies, limiting comparability.BMI is a widely acknowledged marker of malnutrition in population research.For example, thinness can occur when children do not have sufficient energy and protein; and protein-energy malnutrition often goes hand-in-hand with other nutritional problems, such as deficiencies in micronutrients.At the other end of the BMI spectrum, overweight and obesity reflect an excess of the energy needed for childhood growth and activity.Despite overconsumption of energy, obese individuals may still be lacking in some macro-nutrients and also micro-nutrients that are needed for healthy development.While our understanding of the relationship between nutrition and child development requires further advancement, there is some evidence that children who are deficient in macro-nutrients and micro-nutrients, have poorer cognitive, behavioral, and motor development, as well as physical illness.For example, children who are iron deficient display higher rates of inhibition and clinginess to their caregiver.As some nutrients have been linked to child development, a number of studies have sought to examine the association between general dietary patterns and child development, to allow for the fact that individuals consume combinations of foods and nutrients as part of an overall diet.These studies indicated that healthier dietary patterns may have small benefits to intelligence and cognition.Attention has also been paid to the consumption of breakfast and whether it has benefits for cognition and behavior; intuitively, the consumption of breakfast after a period of overnight fasting should be beneficial for physical and mental well-being, although the majority of evidence only points toward benefits in adolescents or malnourished children.Nevertheless, 
children living in families that report that they sometimes or often do not get enough food to eat have been shown to have worse academic outcomes, and there are anecdotal reports of children coming to school hungry in high-income countries such as Australia, which, in turn, affects short-term concentration levels and the ability to learn.The aim of this study was to investigate whether children who do not have a healthy BMI are more likely to be developmentally vulnerable on a global measure of child development at the start of school.We investigated five important developmental domains, and examined categories of weight status spanning the full spectrum of BMI, as we anticipated that the effects of low and high BMI, on different aspects of child development, would vary."We did this using four routinely collected government data sources in South Australia, which offer a unique opportunity to examine associations between BMI status and children's development, and adjust for a wide range of potential confounding variables relating to the child's demographic, socio-economic, and birth characteristics.The study sample comprised children who took part in the 2009 AEDC in the first year of school and received a preschool health check at which height and weight were collected.When possible, potential confounding factors were obtained from these two datasets.Additional potential confounding variables were obtained from two further datasets: perinatal hospital records and the student school enrollment census.The measures used in the analysis, and the datasets they were derived from, are now described.The census of child development, known as the AEDC, is conducted by the Australian federal government every three years.Alongside questions regarding child demographics, and teacher and classroom characteristics, the AEDC includes a validated questionnaire, which was adapted to the Australian context from the Canadian Early Development Instrument, and is designed to capture a holistic measure of child development.The AEDC has equivalent psychometric properties compared to the Canada and United States EDI measures, demonstrating good content, construct, and predictive validity, and excellent internal reliability.The questionnaire is filled in by school teachers for children attending their first year of school.It is made up of 95 questions, which are used to create scores across five developmental domains: Physical Health and Wellbeing, Social Competence, Emotional Maturity, Language and Cognitive Skills, and Communication Skills and General Knowledge.Each of the domains is made up of several subdomains.The scores from each of the domains are adjusted for age and, according to national reporting practices, children with scores below the 10th percentile on each domain are categorized as being developmentally vulnerable.An additional global measure of child development is also created, which captures whether a child is vulnerable on one or more of the domains.All domains of the AEDC, and the global measure, demonstrate internal consistency and predictive validity for later school achievement in Australia.In this analysis, we used data from the 2009 AEDC.We examined vulnerability on each of the five domains, and also the global measure of developmental vulnerability."In South Australia, a preschool health check is freely available to all children prior to entering school. 
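The vulnerability coding used throughout, where a child is classed as developmentally vulnerable on a domain if their age-adjusted score falls below the 10th percentile and as vulnerable overall if this holds for one or more domains, can be expressed compactly. The sketch below uses simulated scores and hypothetical column names, and it derives the cut-offs from the sample itself purely for illustration; national AEDC reporting applies fixed reference cut-offs.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
domains = ["physical", "social", "emotional", "language", "communication"]

# Simulated age-adjusted AEDC domain scores for 1,000 children (higher = better)
scores = pd.DataFrame(rng.normal(8.0, 1.5, size=(1000, 5)), columns=domains)

cutoffs = scores.quantile(0.10)            # 10th percentile per domain
vulnerable = scores.lt(cutoffs, axis=1)    # vulnerable on each individual domain
vulnerable_any = vulnerable.any(axis=1)    # vulnerable on one or more domains

print(vulnerable.mean())                   # ~10% per domain by construction
print(vulnerable_any.mean())               # proportion vulnerable on >= 1 domain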
"Children's height, weight, hearing, vision, and oral health are assessed by community health nurses, at local health clinics or the child's preschool. "We used children's height and weight data to estimate BMI/height2). "BMIs were transformed to sex- and age-specific z-scores using the World Health Organization's reference data for child growth and the zanthro program for Stata.In accordance with the WHO reference data, children with height, weight or BMIs that were more extreme than five standard deviations above or below the mean were considered implausible and not included in the analysis.Children were categorized as thin, healthy, overweight, or obese using the International Obesity Taskforce age- and sex-specific cut-offs."Potential confounding factors were selected a priori based on a causal model of children's weight status and child development, using directed acyclic graphs.Variables representing common causes of the exposure and outcome were used to account for potential confounding.Where possible, confounding variables were obtained from the preschool health check and AEDC databases; additional variables were drawn from the perinatal and student school enrollment census databases, as now outlined.Indigenous status of the child was obtained from the AEDC, as was area disadvantage.Remote/non-remote area of residence was examined using the Australian Remoteness Index for Areas, which was obtained from the preschool health check."A number of characteristics relating to the child and their birth were obtained from perinatal hospital records: gender, maternal age at child's birth, maternal smoking during second half of pregnancy, plurality, gestational age at birth, birth-weight-for-gestational age z-score), and whether the mother experienced any complications during the perinatal period and diabetes, which were defined using standard criteria.Collection of these data by midwives and neonatal nurses using a Supplementary Birth Record form and a companion guide are mandatory for all live births in South Australia.They are collated by the Perinatal Outcomes Unit, South Australian Department of Health, and have been validated against an audit of medical records.A number of additional socio-economic variables were obtained from the school enrollment census: parental education, parental employment, and eligibility for a school card."This information was derived from a form which must be completed by the children's parents/guardians at the start of school and again if the child changes school. 
"Schools are expected to perform validation checks and report data to the state government Department for Education and Child Development annually.In 2009, 64% of children attended government schools in South Australia and 36% attended private or were home schooled.Data from the four government datasets were linked by an independent agency to maintain confidentiality.Data custodians from the government departments provided basic identifiers such as name, age, gender, and address, to the data linkage agency, who then used a probabilistic linkage algorithm to match records from different datasets.To minimize mismatches of individuals across the datasets, the data linkage agency undertook a set of quality assurance checks and clerical review.As unique identification numbers are not used in Australia, linkages are necessarily probabilistic and are based on key demographic information and therefore a small degree of error is to be expected.Linkage errors can occur from missed links or incorrect positive links.Calculation of false linkages has not yet been undertaken in South Australia; however, Western Australia and New South Wales use similar systems and estimate false positive linkage errors of approximately 0.1% and 0.3%, respectively.In less than 0.5% of cases, the information in the datasets did not uniquely identify each case, resulting in a very small number of duplicates.All duplicates were omitted prior to analysis.Approval for this study was given by the ethics committees of the South Australian Department of Health and the University of Adelaide.Approval was also provided by the data custodians, who are representatives from the government departments that are responsible for the datasets.This study involved the use of de-identified data only.Regression models were used to estimate associations between BMI category and vulnerability on each of the developmental domains, using binary regression to estimate the relative risk and 95% confidence interval before and after adjustment for confounding variables.All analyses were carried out in Stata SE 13.The measures of child development were taken from the 2009 AEDC.In 2009, 93% of South Australian students had an AEDC checklist completed.In our dataset, the majority of children were aged 4–6 years when the AEDC was collected.Children outside this age range were excluded from the current study, leaving n = 16,515 children.Eighteen thousand, one hundred and forty children, who we estimated to be eligible for the 2009 AEDC, had taken part in the preschool health check and had valid BMI data; of these 7533 took part in the AEDC.Fifty seven percent of children were missing data on at least one of the confounding variables, with the highest levels of missing for variables collected in the school enrollment census.Children with complete information on all relevant variables tended to be more advantaged than the response sample.To minimize bias, missing data on the confounding variables were imputed, using multiple imputation by chained equations."Twenty datasets were generated, and results were combined using Rubin's rules.Imputation was carried out under a Missing At Random assumption.The imputation model included all outcome, exposure and confounding variables.The characteristics of the imputed data were comparable with the response sample.Results are reported for the imputed sample, unless otherwise stated.Fifteen percent of children were overweight, 6% were thin and 5% obese.A description of these children, in terms of socio-economic, 
demographic, and birth characteristics, is given in Table 1.Table 2 shows the prevalence of developmental vulnerability for thin, healthy, overweight, and obese children.The prevalence of vulnerability among healthy-weight children ranged from 4.1% for Language and Cognitive Skills to 8.9% for Emotional Maturity; 18.5% were vulnerable on one or more of the domains.Table 3 presents risk ratios for vulnerability of each of the developmental domains, and vulnerability on one or more domains among thin, overweight, and obese children, compared with healthy-weight children.The unadjusted point estimate for obese children reflected a higher relative risk of developmental vulnerability on the majority of the domains.For example, they were more likely to be developmentally vulnerable in terms of Physical Health and Wellbeing and Social Competence; they were also more likely to be vulnerable on one or more domains.Few differences in development were observed between thin and healthy-weight children, whereas children who were overweight tended to have lower risks on developmental vulnerability, particularly for Language and Cognitive skills.After adjusting for confounding factors, obese children were still twice as likely to be developmentally vulnerable on the Physical Health and Wellbeing domain.RRs for Social Competence, and for vulnerability on one or more domains, were attenuated but persisted, particularly for the latter.As in the unadjusted analyses, there were no observable differences between thin and healthy-weight children.The lower risks of developmental vulnerability in overweight children were attenuated in many cases but persisted for Language and Cognitive skills.The associations in the complete-case sample were similar.Due to the large differences in physical development seen across the BMI categories, we carried out additional exploratory analyses using the physical subdomains of the AEDC.After adjusting for confounding factors, elevated risks were seen for Physical Independence and Gross and Fine Motor skills, but less so for Physical Readiness for the School Day.As seen with the overall physical domain, no differences were observed for thin or overweight children.Information linked across a number of routine datasets was employed to examine the association between BMI and child development, in over 7500 South Australian children at school entry.We applied Cole et al. 
cut-offs to differentiate between thinness and healthy weight, unlike many previous studies which combined these groups as the baseline.We found no discernible differences in the physical, social, emotional, cognitive, and communication development of thin children as compared to their healthy-weight peers.In addition, our findings indicate that only obese children, at mean age 4.8 years, are more developmentally vulnerable a few months later.Specifically, obese children were approximately 30% more likely to be vulnerable in terms of social competence, although this was attenuated after adjustment.In addition, obese children were more than twice as likely to be developmentally vulnerable on the physical domain compared to healthy-weight children.This association remained after adjusting for a wide range of covariables, and across two physical subdomains.To our knowledge, our study is the first to examine the effects of thinness, overweight, and obesity on a global measure of child development that takes into account the physical, social, emotional, and cognitive development of children.If these effects are causal, we have shown that obesity appears to have a detrimental effect on some aspects of child development, particularly physical development, and to some extent social competence.In contrast, being overweight tended to have no effect on physical and social domains, yet possibly protective effects on the Language & Cognitive skills domain.We found that thin children appear to be no more or less likely to be developmentally vulnerable than their healthy-weight peers.Of the studies that combined overweight and obese children together, some found elevated risks of poor mental health, social functioning, and physical health; another found no relationship, although arguably, this study investigated children who were born before the onset of the obesity epidemic.It is possible that studies examining overweight may have seen bigger effect sizes for the obese children had they separated out these two groups.The few studies that have examined overweight and obesity separately, like our study, found that obesity had a greater detrimental impact on socio-emotional behavior and physical health.We found that obese children were around 30% more likely to be vulnerable on the Social Competence domain, which refers to overall social competence, responsibility and respect, approaches to learning, and readiness to explore new things.Comparability of the AEDC domains with the developmental measures used in other studies is limited, though earlier findings have shown that children who are obese have poorer socio-emotional well-being and behavior, as measured on the Strengths and Difficulties Questionnaire.Although the SDQ also captures emotional problems, in this study we found no association between BMI status and emotional maturity.Obese children were more than twice as likely to be vulnerable on the Physical Health and Wellbeing domain.A body of research has found that obese children have greater special health care needs and are more likely to experience wheeze and possibly infections and health-related limitations."While the Physical Health and Wellbeing domain of the AEDC does not capture specific health conditions in this way, it is designed to encapsulate any barriers that might impede a child's preparedness to learn and participate in school life. 
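The adjusted risk ratios reported in this study come from binary regression models fitted in Stata, as described in the methods above. As a hedged illustration only, the sketch below estimates relative risks in Python with a Poisson GLM and robust standard errors, a commonly used alternative to log-binomial models for binary outcomes; the data frame, variable names and effect sizes are simulated and are not the study data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "bmi_cat": rng.choice(["healthy", "thin", "overweight", "obese"],
                          size=n, p=[0.74, 0.06, 0.15, 0.05]),
    "male": rng.integers(0, 2, n),
})
# Build in an elevated risk of physical vulnerability for obese children
risk = 0.07 + 0.07 * (df["bmi_cat"] == "obese")
df["vuln_physical"] = (rng.random(n) < risk).astype(int)

# Poisson GLM with robust (HC1) errors: exponentiated coefficients are risk ratios
model = smf.glm("vuln_physical ~ C(bmi_cat, Treatment(reference='healthy')) + male",
                data=df, family=sm.families.Poisson())
res = model.fit(cov_type="HC1")
print(np.exp(res.params))       # adjusted RRs vs the healthy-weight reference
print(np.exp(res.conf_int()))   # 95% confidence intervals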
"It is therefore possible that the presence of health problems, such as wheezing, in a child might be reflected in teachers' ratings on the AEDC.The information collected in the AEDC not only allowed us to examine five broad developmental domains, but also to investigate more specific aspects of development using the subdomains.We did this for the Physical Health and Wellbeing domain because its association with obesity was especially high;.We found that this elevated risk remained in two of the three sub-domains.Several studies have found that overweight children have poorer gross motor skills than healthy-weight children, though associations tended to be for skills related directly to their weight, whereas fine motor skills or general coordination were not affected.We found that vulnerability on the Language and Cognitive Skills domain did not vary between healthy-weight and obese children, unlike two studies from America and Europe, which found a detrimental effect on cognitive development and academic scores.However, one of these studies concentrated on older children and it is possible that the impacts of BMI on cognition and academic performance is cumulative or become apparent at later ages when cognitive abilities have undergone further development.There was also the suggestion that overweight children had better language and communication skills; to our knowledge no other study has shown this finding and this should be examined further.This is the first study to explore the association between BMI and a holistic measure of early child development.Using IOTF cut-offs for overweight and obesity, and cut-offs for thinness, we examined the full spectrum of BMI categories.We differentiated between overweight and obesity, because it has been postulated that null findings in earlier studies are the result of combining these groups.Furthermore, unlike many earlier studies, we separated out thinness from healthy-weight children, which allowed us to examine whether there was an increased risk to development associated with under-nutrition among children from an otherwise well-nourished, high-income country.The advantage of using a holistic measure of development such as the AEDC is that we were able to explore whether some aspects of child development were more strongly related to BMI than others.Through linking to other routine datasets, such as perinatal hospital records and the school enrollment census, we were able to adjust for a wider range of covariables than typically used in analyses of routine data, and even some surveys.Nevertheless it is possible that the association we observed between obesity and child development is the result of unmeasured or residual confounding.For example, we were not able to adjust for family income, or attendance at childcare, both of which may be associated with child development and with childhood overweight."BMI status reflects whether the child's diet has been meeting their nutritional needs for an extended period of time.BMI is easier to collect and analyze, and subject to less measurement error than techniques for measuring the intakes of individual nutrients, such as a 24-h dietary recall.BMI is therefore a well-established marker of nutrition in research using population samples.That said, we acknowledge that, at an individual level, an unhealthy BMI will not always be the result of malnutrition.For example a low BMI could result from a period of illness."Equally, Cole's international BMI cut-offs provide a good measure of adiposity for 
monitoring weight at the population level; nevertheless they cannot provide an accurate measure of fat mass in individuals.Since only a small proportion of children were severely thin in South Australia, we were unable to examine the relationship between child development and different grades of thinness.Therefore, it remains possible that children of very low BMI have worse developmental outcomes and future research should examine this.All of our analyses were limited to children who had both BMI and AEDC data."We only had access to children's height and weight if they were recorded in the Women's and Children's Health Clinic database.Some children may have had their weight and height measured in alternative settings such as well-child checks conducted by General Practitioners."Whether the children who attend the Women's and Children's Health Clinics differ from children who attend well-child checks by GPs is not known because no data are available for comparison.However, we hypothesize that children measured in GP clinics may be more likely to have higher levels of health need, or use private health care.It is therefore possible that the data under-represent the unhealthiest children, or the children in the extremes of advantage and disadvantage."Children's height and weight were measured by community health nurses, and the AEDC was collected by school teachers, both of whom are likely to have reduced recall or social desirability biases, compared with parent report. "While this is a strength of our data, it remains possible that teachers' perceptions of a child's weight status may bias their reporting of the AEDC items.In the present analysis, we examined the association between BMI and a measure of child development captured at the start of school; future research should examine whether the association between pre-school BMI status and child development persists or changes into later childhood.Finally, we were unable to explore common underlying influences such as diet, chronic disease, and rare genetic conditions which may lead to extreme BMIs and be linked to impeded cognitive development, behavioral problems, and poorer physical well-being.Every child has the right to healthy development and the opportunity to fulfill their potential, but unfortunately, some groups of children fare worse than others.Inequalities in child development exist across countries and also within them, regardless of national wealth.In this study, we have shown that obese children are more likely to be developmentally vulnerable than their healthy-weight peers even as they start school.In particular, they are less likely to have the physical attributes required to maximize their potential to benefit from schooling.These include fine and gross motor skills, and physical independence.These differences persisted after adjusting for a range of covariables, including a number of socio-economic characteristics.Our findings point to the detrimental impacts of obesity and not overweight on child development.Despite this, we stress the importance of focusing on overweight as well obesity, since overweight children are at a greater risk of becoming obese.To our knowledge, this is the first study to examine the links between BMI status and a global measure of early child development.Furthermore it demonstrates the value of data linkage for enhancing routine data like the AEDC.Our analyses refer to 4–5 year old children, who had been attending primary school in Australia for approximately four months.Therefore, the 
associations observed cannot be arising as a result of school influences, but more likely from characteristics of parents, the home environment, childcare, and neighborhoods.While we know that early life influences, such as maternal pre-pregnancy BMI and sedentary behaviors are likely to be contributing to early childhood obesity, the majority of intervention studies for obesity prevention have focused on older children and the evidence base for preschool children is limited.Obesity prevention trials implemented from birth have resulted in negligible effects, and there is insufficient evidence around the wider societal and policy influences, such as parental employment and childcare, on preschool obesity.Efforts to increase the evidence base around the prevention of childhood obesity before children start school, including intervention studies and causal analysis of secondary data, should be continued.Around one fifth of children are overweight or obese by the time they start school.Every child has the right to healthy development, yet obese children are more likely to be developmentally vulnerable.The potential benefits of obesity reduction for physical health conditions in adulthood and life expectancy have already been widely documented.The findings from this study imply that tackling early childhood obesity may also have positive impacts for child development, leading to improvements in academic achievement and ultimately a fairer and more economically productive society.
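Pooling results across the twenty imputed datasets with Rubin's rules, as described in the methods above, is itself a short calculation: the pooled estimate is the mean across imputations, and the total variance adds the within-imputation variance to an inflated between-imputation variance. The log risk ratios and standard errors below are invented for illustration, and a normal approximation is used for the interval.

import numpy as np

def pool_rubin(estimates, variances):
    # Rubin's rules: pooled estimate is the mean across imputations; total
    # variance = mean within-imputation variance + (1 + 1/m) * between variance
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()
    t = variances.mean() + (1.0 + 1.0 / m) * estimates.var(ddof=1)
    return q_bar, np.sqrt(t)

# Hypothetical log-RRs (obese vs healthy weight) from 20 imputed datasets
log_rr = np.random.default_rng(2).normal(np.log(2.2), 0.02, 20)
se2 = np.full(20, 0.13 ** 2)
est, se = pool_rubin(log_rr, se2)
lo, hi = np.exp(est - 1.96 * se), np.exp(est + 1.96 * se)
print(f"pooled RR = {np.exp(est):.2f} (95% CI {lo:.2f}-{hi:.2f})")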
Little is known about the holistic development of children who are not healthy-weight when they start school, despite one fifth of preschool-aged children in high income countries being overweight or obese. Further to this, there is a paucity of research examining low body mass index (BMI) in contemporary high-income populations, although evidence from the developing world demonstrates a range of negative consequences in childhood and beyond. We investigated the development of 4-6 year old children who were thin, healthy-weight, overweight, or obese (as defined by BMI z-scores) across the five domains of the Australian Early Development Census (AEDC): Physical Health and Wellbeing, Social Competence, Emotional Maturity, Language and Cognitive Skills, and Communication Skills and General Knowledge. We used a linked dataset of South Australian routinely collected data, which included the AEDC, school enrollment data, and perinatal records (n = 7533). We found that the risk of developmental vulnerability among children who were thin did not differ from healthy-weight children, after adjusting for a range of perinatal and socio-economic characteristics. On the whole, overweight children also had similar outcomes as their healthy-weight peers, though they may have better Language and Cognitive skills (adjusted Risk Ratio [aRR] = 0.73 [95% CI 0.50-1.05]). Obese children were more likely to be vulnerable on the Physical Health and Wellbeing (2.20 [1.69, 2.87]) and Social Competence (1.31 [0.94, 1.83]) domains, and to be vulnerable on one or more domains (1.45 [1.18, 1.78]). We conclude that children who are obese in the first year of school may already be exhibiting some developmental vulnerabilities (relative to their healthy-weight peers), lending further support for strategies to promote healthy development of preschoolers.
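The BMI z-scores referred to above were derived from WHO reference data using the zanthro program for Stata; the underlying LMS transformation can be sketched in a few lines. The L, M and S values below are placeholders rather than actual WHO reference parameters, and the |z| > 5 exclusion mirrors the implausible-value rule described in the methods.

from math import log

def lms_zscore(x, L, M, S):
    # LMS transformation used by WHO growth references:
    # z = ((x / M) ** L - 1) / (L * S) when L != 0, and ln(x / M) / S when L == 0
    if L == 0:
        return log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Placeholder reference parameters for a single age/sex stratum (illustration only)
L_ref, M_ref, S_ref = -1.0, 15.7, 0.08

bmi = 21.0                                  # kg/m2 for a hypothetical 5-year-old
z = lms_zscore(bmi, L_ref, M_ref, S_ref)
plausible = abs(z) <= 5                     # drop |z| > 5 SD as implausible
print(round(z, 2), plausible)

Categorisation into thin, healthy-weight, overweight and obese would then apply the IOTF age- and sex-specific BMI cut-offs to the measured BMI values.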
Genomic Determinants of Protein Abundance Variation in Colorectal Cancer Cells
Tumors exhibit a high degree of molecular and cellular heterogeneity due to the impact of genomic aberrations on the protein networks underlying physiological cellular activities. Modern mass-spectrometry-based proteomic technologies have the capacity to perform highly reliable analytical measurements of proteins across large numbers of subjects and analytes, providing a powerful tool for the discovery of regulatory associations between genomic features, gene expression patterns, protein networks, and phenotypic traits. However, understanding how genomic variation affects protein networks and leads to variable proteomic landscapes and distinct cellular phenotypes remains challenging due to the enormous diversity in the biological characteristics of proteins. Studying protein co-variation holds the promise of overcoming the challenges associated with the complexity of proteomic landscapes, as it enables the grouping of multiple proteins into functionally coherent groups, and is now gaining ground in the study of protein associations. Colorectal cancer cell lines are widely used as cancer models; however, their protein and phosphoprotein co-variation networks and the genomic factors underlying their regulation remain largely unexplored. Here, we studied a panel of 50 colorectal cancer cell lines using isobaric labeling and tribrid mass spectrometry proteomic analysis in order to assess the impact of somatic genomic variants on protein networks. This panel has been extensively characterized by whole-exome sequencing, gene expression profiling, and copy number and methylation profiling, and the frequency of molecular alterations is similar to that seen in clinical colorectal cohorts. First, we leveraged the robust quantification of over 9,000 proteins to build de novo protein co-variation networks, and we show that they are highly representative of known protein complexes and interactions. Second, we rationalize the impact of genomic variation in the context of the cancer cell protein co-variation network to uncover protein network vulnerabilities. Proteomic and RNA sequencing analysis of human induced pluripotent stem cells engineered with gene knockouts of key chromatin modifiers confirmed that genomic variation can be transmitted from directly affected proteins to tightly co-regulated distant gene products through protein interactions. Overall, our results constitute an in-depth view of the molecular organization of colorectal cancer cells widely used in cancer research. To assess the variation in protein abundance and phosphorylation within a panel of 50 colorectal cancer cell lines, we utilized isobaric peptide labeling and MS3 quantification. We obtained relative quantification between the different cell lines for an average of 9,410 proteins and 11,647 phosphopeptides. To assess the reproducibility of our data, we computed the coefficient of variation (CV) of protein abundances for 11 cell lines measured as biological replicates. The median CV in our study was 10.5%, showing levels of intra-laboratory biological variation equivalent to previously published TMT data for seven colorectal cancer cell lines. Inter-laboratory comparison for the 7 cell lines common to both studies showed a median CV = 13.9%; the additional variation encompasses differences in sample preparation methods, mass spectrometry instrumentation, and raw signal processing. The same SW48 protein digest, aliquoted in two parts and labeled with two different TMT labels within the same 10plex experiment, displayed a median CV = 1.9%, indicating that the labeling procedure and the mass spectrometry signal acquisition noise contribute very little to the total variation.
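As an illustration of the replicate-variation summaries just described (per-protein CV for biological replicates, summarized as the median, and compared with variation between cell lines), here is a minimal sketch. It assumes a protein-by-sample abundance table on a linear scale; the file name and replicate column names are hypothetical, and this is not the authors' actual pipeline.

```python
# Minimal sketch: per-protein coefficient of variation across biological
# replicates of one cell line, summarized as the median CV, and compared with
# inter-cell-line variation. File and column names are hypothetical.
import pandas as pd

abund = pd.read_csv("protein_abundance.csv", index_col="protein")  # proteins x samples
replicates = ["SW48_rep1", "SW48_rep2"]  # hypothetical replicate columns

rep = abund[replicates]
cv = rep.std(axis=1, ddof=1) / rep.mean(axis=1) * 100  # CV (%) per protein
print(f"median replicate CV = {cv.median():.1f}%")

# Variation across all cell lines, compared per protein with the replicate baseline
all_cv = abund.std(axis=1, ddof=1) / abund.mean(axis=1) * 100
frac = (all_cv > cv).mean()
print(f"{frac:.0%} of proteins vary more between lines than between these replicates")
```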
The protein abundance profiles for the 11 cell lines measured as biological replicates in two separate sets are shown as a heatmap in Figure S1D, revealing the high heterogeneity of the COREAD proteomic landscapes. The variation between different cell lines was on average 3 times higher than the variation between replicates, with 93% of the proteins exhibiting an inter-sample variation greater than the respective baseline variation between replicates. For proteins participating in basic cellular processes, the median CV between biological replicates was as low as 8%. At the phosphopeptide level, the SW48 biological replicates across all multiplex sets displayed a median CV = 22%, reflecting the generally higher uncertainty of peptide-level measurements compared to protein-level measurements. Taken together, our results show that protein abundance differences as low as 50%, or 1.5-fold, can be reliably detected using our proteomics approach at both the proteome and phosphoproteome level. Qualitatively, phosphorylated proteins were highly enriched for spliceosomal and cell cycle functions and covered a wide range of cancer-related pathways. The phosphosites comprised 86% serine, 13% threonine, and <1% tyrosine phosphorylation, and the most frequent motifs identified were pS-P and pT-P. Approximately 70% of the quantified phosphorylation sites are cataloged in UniProt, and 751 of these represent known kinase substrates in the PhosphoSitePlus database. In terms of phosphorylation quantification, we observed that phosphorylation profiles were strongly correlated with the respective protein abundances; therefore, to detect net phosphorylation changes, we corrected the phosphorylation levels for total protein changes by linear regression. Correlation analysis between mRNA and relative protein abundances for each gene across the cell lines indicated a pathway-dependent concordance of protein/mRNA expression, with median Pearson's r = 0.52. Highly variable mRNAs tend to correspond to highly variable proteins, although with a wide distribution. Notably, several genes, including TP53, displayed high variation at the protein level despite low variation at the mRNA level, implicating significant post-transcriptional modulation of their abundance. Our COREAD proteomics and phosphoproteomics data can be downloaded from ftp://ngs.sanger.ac.uk/production/proteogenomics/WTSI_proteomics_COREAD/ in annotated *.gct, *.gtf, and *.bb file formats compatible with the Integrative Genomics Viewer, the Morpheus clustering web tool, or the Ensembl and University of California Santa Cruz genome browsers. Our proteomics data can also be viewed through the Expression Atlas database. The protein abundance measurements allowed us to study the extent to which proteins tend to be co-regulated in abundance across the colorectal cancer cell lines. We first computed the Pearson's correlation coefficients between proteins with known physical interactions in protein complexes cataloged in the CORUM database. We found that the distribution of correlations between CORUM protein pairs was bimodal and clearly shifted to positive values (mean 0.33), whereas all pairwise protein-to-protein correlations displayed a normal distribution with mean 0.01. Specifically, 290 partially overlapping CORUM complexes showed a greater than 0.5 median correlation between their subunits. It has been shown that high-stoichiometry
interactors are more likely to be coherently expressed across different cell types; therefore, our correlation data offer an assessment of the stability of known protein complexes in the context of colorectal cancer cells.Moreover, less stable or context-dependent interactions in known protein complexes may be identified by outlier profiles.Such proteins, with at least 50% lower individual correlation compared to the average complex correlation, are highlighted in Table S3.For example, the ORC1 and ORC6 proteins displayed a divergent profile from the average profile of the ORC complex, which is in line with their distinct roles in the replication initiation processes.In contrast, the distribution of Pearson’s coefficients between CORUM pairs based on mRNA co-variation profiles was only slightly shifted toward higher correlations with mean = 0.096.Interestingly, proteins with strong correlations within protein complexes showed low variation across the COREAD panel and have poor correspondence to mRNA levels.Together, these suggest that the subunits of most of the known protein complexes are regulated post-transcriptionally to accurately maintain stable ratio of total abundance.Receiver operating characteristic analyses confirmed that our proteomics data outperformed mRNA data in predicting protein complexes as well as high confident STRING interactions.The ability to also predict any type of STRING interaction suggests that protein co-variation also encompasses a broader range of functional relationships beyond structural physical interactions.Overall, our results demonstrate that correlation analysis of protein abundances across a limited set of cellular samples with variable genotypes can generate co-variation signatures for many known protein-protein interactions and protein complexes.We conducted a systematic un-biased genome-wide analysis to characterize the colorectal cancer cell protein-protein correlation network and to identify de novo modules of interconnected proteins.To this end, we performed a weighted correlation network analysis using 8,295 proteins quantified in at least 80% of the cell lines.A total of 284 protein modules ranging in size from 3 to 1,012 proteins were inferred covering the entire input dataset.An interaction weight was assigned to each pair of correlating proteins based on their profile similarities and the properties of the network.We performed Gene Ontology annotation of the modules with the WGCNA package as well as using additional terms from CORUM, KEGG, GOBP-slim, GSEA, and Pfam databases with a Fisher’s exact test.We found significantly enriched terms for 235 modules with an average annotation coverage of 40%.Specifically, 111 modules displayed overrepresentation of CORUM protein complexes.For 29 of the 49 not-annotated modules, we detected known STRING interactions within each module, suggesting that these also capture functional associations that do not converge to specific terms.The correlation networks of protein complexes with more than 2 nodes are shown in Figure 1C.The global structure of the colorectal cancer network comprised of modules with at least 50 proteins is depicted in Figure 1D and is annotated by significant terms.The entire WGCNA network contains 87,420 interactions, encompassing 7,248 and 20,969 known CORUM and STRING interactions of any confidence, respectively.Overlaying the protein abundance levels on the network generates a unique quantitative map of the cancer cell co-variome, which can help discriminate the different 
biological characteristics of the cell lines.For instance, it can be inferred that the CL-40 cell line is mainly characterized by low abundances of cell cycle, ribosomal, and RNA metabolism proteins, which uniquely coincide with increased abundances of immune response proteins.The full WGCNA network with weights greater than 0.02 is provided in Table S5.As most of the proteins in modules representing protein complexes are poorly correlated with mRNA levels, we next sought to understand the transcriptional regulation of the modules with the highest mean mRNA-to-protein correlations.These included several large components of the co-variome, modules showing enrichment for experimental gene sets, and modules containing proteins encoded by common chromosomal regions, implicating the effects of DNA copy number variations.In order to further annotate the modules with potential transcriptional regulators, we examined whether transcription factors that are members of the large transcriptionally regulated modules are co-expressed along with their target genes at the protein level.Transcription factor enrichment analysis indicated that the “xenobiotic and small molecule metabolic process” module was enriched for the transcription factors HNF4A and CDX2 and that STAT1/STAT2 were the potential master regulators of the “immune response” module.HNF4A is an important regulator of metabolism, cell junctions, and the differentiation of intestinal epithelial cells and has been previously associated with colorectal cancer proteomic subtypes in human tumors analyzed by the CPTAC consortium.Here, we were able to further characterize the consequences of HNF4A variation through its proteome regulatory network.To globally understand the interdependencies of protein complexes in the colorectal cancer cells, we plotted the module-to-module relationships as a correlation heatmap using only modules enriched for protein complexes.The representative profile of each module was used as a metric.This analysis captures known functional associations between protein complexes and reveals the higher order organization of the proteome.The major clusters of the correlation map can be categorized into three main themes: gene expression/splicing/translation/cell cycle; protein processing and trafficking; and mitochondrial functions.This demonstrates that such similarity profiling of abundance signatures has the potential to uncover novel instances of cross-talk between protein complexes and also to discriminate sub-complexes within larger protein assemblies.In addition to protein abundance co-variation, the scale of global phosphorylation survey accomplished here offers the opportunity for the de novo prediction of kinase-substrate associations inferred by co-varying phosphorylation patterns that involve kinases.Correlation analysis among 436 phosphopeptides attributed to 137 protein kinases and 29 protein phosphatases yielded 186 positive and 40 negative associations at Benj.Hoch.FDR < 0.1, representing the co-phosphorylation signature of kinases and phosphatases in the COREAD panel.Using this high-confidence network as the baseline, we next focused on co-phosphorylation profiling of kinases and phosphatases involved in KEGG signaling pathways, where known kinase relationships can be used to assess the validity of the predictions.We found co-regulated phosphorylation between RAF1, MAPK1, MAPK3, and RPS6KA3, which were more distantly correlated with the co-phosphorylated BRAF and ARAF protein kinases, all members of the 
mitogen-activated protein kinase pathway core axis.MAP2K1 was found phosphorylated at T388, which was not correlating with the above profile.The S222 phosphorylation site of MAP2K2, regulated by RAF kinase, was not detected possibly due to limitations related to the lengthy theoretical overlapping peptide.Strongly maintained co-phosphorylation between CDK1, CDK2, and CDK7 of the cell cycle pathway was another true positive example.The correlation plots of MAPK1 and MAPK3 phosphorylation and total protein are depicted in Figure S4C, top panel.The co-phosphorylation of BRAF and ARAF is depicted in Figure S4C, bottom left panel.A negative correlation example, reflecting the known role of PPP2R5D as an upstream negative regulator of CDK1, is shown in Figure S4C, bottom right panel.Taken together, our correlation analyses reveal the higher-order organization of cellular functions.This well-organized structure is shaped by the compartmental interactions between protein complexes, and it is clearly divided into transcriptionally and post-transcriptionally regulated sectors.The analysis performed here constitutes a reference point for the better understanding of the underlying biological networks in the COREAD panel.The resolution and specificity of the protein clusters can be further improved by the combinatorial use of alternative algorithms for construction of biological networks.Similarly, correlation analysis of protein phosphorylation data demonstrates that functional relationships are encrypted in patterns of co-regulated or anti-regulated phosphorylation events.Assessing the impact of non-synonymous protein coding variants and copy number alterations on protein abundance is fundamental in understanding the link between cancer genotypes and dysregulated biological processes.To characterize the impact of genomic alterations on the proteome of the COREAD panel, we first examined whether driver mutations in the most frequently mutated colorectal cancer driver genes could alter the levels of their protein products.For 10 out of 18 such genes harboring driver mutations in at least 5 cell lines, we found a significant negative impact on the respective protein abundances, in line with their function as tumor suppressors, whereas missense mutations in TP53 were associated with elevated protein levels as previously reported.For the majority of driver mutations in oncogenes, there was no clear relationship between the presence of mutations and protein expression.From these observations, we conclude that mutations in canonical tumor suppressor genes predicted to cause nonsense-mediated decay of transcript generally result in a decrease of protein abundance.This effect, however, varies between the cell lines.We extended our analysis to globally assess the effect of mutations on protein abundances.For 4,658 genes harboring somatic single-amino-acid substitutions in at least three cell lines, only 12 proteins exhibited differential abundances in the mutated versus the wild-type cell lines at ANOVA test FDR < 0.1.Performing the analysis in genes with loss-of-function mutations showed that 115 out of the 957 genes tested presented lower abundances in the mutated versus the wild-type cell lines at ANOVA test FDR < 0.1.The STRING network of the top significant hits is depicted in Figure 3C and indicates that many of the affected proteins are functionally related.Overall, almost all proteins in a less stringent set with p value < 0.05 were found to be downregulated by LoF mutations, confirming the general 
negative impact on protein abundances.As expected, zygosity of LoF mutations was a major determinant of protein abundance, with homozygous mutations imposing a more severe downregulation compared to heterozygous mutations.Whereas the negative impact of LoF mutations was not biased toward their localization in specific protein domains, we found that mutations localized closer to the protein C terminus were slightly less detrimental.Notably, genes with LoF mutations and subsequently the significantly affected proteins displayed an overrepresentation of chromatin modification proteins over the identified proteome as the reference set.Chromatin modifiers play an important role in the regulation of chromatin structure during transcription, DNA replication, and DNA repair.Impaired function of chromatin modifiers can lead to dysregulated gene expression and cancer.Our results show that loss of chromatin modification proteins due to the presence of LoF mutations is frequent among the COREAD cell lines and represents a major molecular phenotype.A less-pronounced impact of LoF mutations was found at the mRNA level, where only 29 genes exhibited altered mRNA abundances in the mutated versus the wild-type cell lines at ANOVA test FDR < 0.1.The overlap between the protein and mRNA level analyses is depicted in Figure 3F. Even when we regressed out the mRNA levels from the respective protein levels, almost 40% of the proteins previously found to be significantly downregulated were recovered at ANOVA test FDR < 0.1 and the general downregulation trend was still evident.On the contrary, regression of protein values out of the mRNA values strongly diminished the statistical significance of the associations between mutations and mRNA levels.The fact that LoF mutations have a greater impact on protein abundances compared to the mRNA levels suggests that an additional post-transcriptional or a post-translational mechanism is involved in the regulation of the final protein abundances.Lastly, 24 of the genes downregulated at the protein level by LoF mutations have been characterized as essential genes in human colon cancer cell lines.Such genes may be used as targets for negative regulation of cancer cell fitness upon further inhibition.We also explored the effect of 20 recurrent copy number alterations, using binary-type data, on the abundances of 207 quantified proteins falling within these intervals.Amplified genes tended to display increased protein levels, whereas gene losses had an overall negative impact on protein abundances with several exceptions.The 49 genes for which protein abundance was associated with CNAs at ANOVA p value < 0.05 were mapped to 13 genomic loci, with 13q33.2 amplification encompassing the highest number of affected proteins.Losses in 18q21.2, 5q21.1, and 17p12 loci were associated with reduced protein levels of three important colorectal cancer drivers: SMAD4; APC; and MAP2K4, respectively.Increased levels of CDX2 and HNF4A transcription factors were significantly associated with 13q12.13 and 20q13.12 amplifications.The association of these transcription factors with a number of targets and metabolic processes as found by the co-variome further reveals the functional consequences of the particular amplified loci.All proteins affected by LoF mutations and recurrent CNAs are annotated in Table S1.Overall, we show that the protein abundance levels of genes with mutations predicted to cause nonsense-mediated mRNA decay are likely to undergo an additional level of negative regulation, 
which involves translational and/or post-translational events.The extent of protein downregulation heavily depends on zygosity and appears to be independent from secondary structure features and without notable dependency on the position of the mutation on the encoded product.Missense mutations rarely affect the protein abundance levels with the significant exception of TP53.We conclude that only for a small portion of the proteome can the variation in abundance be directly explained by mutations and DNA copy number variations.As tightly controlled maintenance of protein abundance appears to be pivotal for many protein complexes and interactions, we hypothesize that genomic variation can be transmitted from directly affected genes to distant gene protein products through protein interactions, thereby explaining another layer of protein variation.To assess the frequency of such events, we retrieved strongly co-varying interactors of the proteins downregulated by LoF mutations to construct mutation-vulnerable protein networks.For stringency, we filtered for known STRING interactions additionally to the required co-variation.We hypothesize that, in these subnetworks, the downregulation of a protein node due to LoF mutations can also lead to the downregulation of interacting partners.These sub-networks were comprised of 306 protein nodes and 278 interactions and included at least 10 well-known protein complexes.Two characteristic examples were the BAF and PBAF complexes, characterized by disruption of ARID1A, ARID2, and PBRM1 protein abundances.To confirm whether the downregulation of these chromatin-remodeling proteins can affect the protein abundance levels of their co-varying interactors post-transcriptionally, we performed proteomics and RNA-seq analysis on CRISPR-Cas9 knockout clones of these genes in isogenic human iPSCs.We found that downregulation of ARID1A protein coincided with diminished protein levels of 7 partners in the predicted network.These show the strongest correlations and are known components of the BAF complex.In addition, reduced levels of ARID2 resulted in the downregulation of three partners unique to the PBAF complex, with significant loss of PBRM1 protein.Several components of the BAF complex were also compromised in the ARID2 KO, reflecting shared components of the BAF and PBAF complexes.Conversely, loss of PBRM1 had no effect on ARID2 protein abundance or any of its module components, in line with the role of PBRM1 in modifying PBAF targeting specificity.The latter demonstrates that collateral effects transmitted through protein interactions can be directional.ARID1A, ARID2, and PBRM1 protein abundance reduction was clearly driven by their respective low mRNA levels; however, the effect was not equally strong in all three genes.Strikingly, the interactors that were affected at the protein level were not regulated at the mRNA level, confirming that the regulation of these protein complexes is transcript independent.ARID1A KO yielded the highest number of differentially expressed genes; however, these changes were poorly represented in the proteome.Although pathway-enrichment analysis in all KOs revealed systematic regulation of a wide range of pathways at the protein level, mostly affecting cellular metabolism, we didn’t identify such regulation at the mRNA level.This suggests that the downstream effects elicited by the acquisition of genomic alterations in the particular genes are distinct between gene expression and protein regulation.The latter prompted us to 
systematically interrogate the distant effects of all frequent colorectal cancer driver genomic alterations on protein and mRNA abundances by protein and gene expression quantitative trait loci analyses.We identified 86 proteins and 196 mRNAs with at least one pQTL and eQTL, respectively, at 10% FDR.To assess the replication rates between independently tested QTL for each phenotype pair, we also performed the mapping using 6,456 commonly quantified genes at stringent and more relaxed significance cutoffs.In both instances, we found moderate overlap, with 41%–64% of the pQTL validating as eQTLs and 39%–54% of the eQTLs validating as pQTL.Ranking the pQTL by the number of associations showed that mutations in BMPR2, RNF43, and ARID1A, as well as CNAs of regions 18q22.1, 13q12.13, 16q23.1, 9p21.3, 13q33.2, and 18q21.2 accounted for 62% of the total variant-protein pairs.The above-mentioned genomic loci were also among the top 10 eQTL hotspots.High-frequency hotspots in chromosomes 13, 16, and 18 associated with CNAs are consistent with previously identified regions in colorectal cancer tissues.We next investigated the pQTL for known associations between the genomic variants and the differentially regulated proteins.Interestingly, increased protein, but not mRNA, levels of the mediator complex subunits were associated with FBXW7 mutations, an ubiquitin ligase that targets MED13/13L for degradation.Overall, our findings indicate that an additional layer of protein variation can be explained by the collateral effects of mutations on tightly co-regulated partners in protein co-variation networks.Moreover, we show that a large portion of genomic variation affecting gene expression is not directly transmitted to the proteome.Finally, distant protein changes attributed to variation in cancer driver genes can be regulated directly at the protein level with indication of causal effects involving enzyme-substrate relationships.To explore whether our deep proteomes recapitulate tissue level subtypes of colorectal cancer and to provide insight into the cellular and molecular heterogeneity of the colorectal cancer cell lines, we performed unsupervised clustering based on the quantitative profiles of the top 30% most variable proteins without missing values by class discovery using the ConsensusClusterPlus method.Optimal separation by k-means clustering was reached using 5 colorectal proteomic subtypes.Our proteomic clusters overlapped very well with previously published tissue subtypes and annotations, especially with the classification described by De Sousa E Melo et al.Previous classifiers have commonly subdivided samples along the lines of “epithelial”, “microsatellite instability-H,” and “stem-like,” with varying descriptions.Our in-depth proteomics dataset not only captures the commonly identified classification features but provides increased resolution to further subdivide these groups.The identification of unique proteomic features pointing to key cellular functions gives insight into the molecular basis of these subtypes and provides clarity as to the differences between them.The CPS1 subtype is the canonical MSI-H cluster, overlapping with the CCS2 cluster identified by De Sousa E Melo et al., CMS1 from Guinney et al., and CPTAC subtype B.Significantly, CPS1 displays low expression of ABC transporters, which may lead to low drug efflux and contribute to the better response rates seen in MSI-H patients.Cell lines with a canonical epithelial phenotype clustered together but are subdivided into 2 
subtypes.These subtypes displayed higher expression of HNF4A, indicating a more differentiated state.Whereas subtype CPS3 is dominated by transit-amplifying cell phenotypes, CPS2 is a more heterogeneous group characterized by a mixed TA and goblet cell signature.CPS2 is also enriched in lines that are hypermutated, including MSI-negative/hypermutated lines.However, lower activation of steroid biosynthesis and ascorbate metabolism pathways as well as lower levels of ABC transporters in CPS1 render this group clearly distinguishable from CPS2.We also observed subtle differences in the genes mutated between the two groups.RNF43 mutations and loss of 16q23.1 are common in CPS1.The separation into two distinct MSI-H/hypermutated classifications was also observed by Guinney et al., and may have implications for patient therapy and prognosis.Transit-amplifying subtype CPS3 can be distinguished from CPS2 by lower expression of cell cycle proteins; predicted low CDK1, CDK2, and PRKACA kinase activities based on the quantitative profile of known substrates from the PhosphoSitePlus database; and high PPAR signaling pathway activation.Common amplifications of 20q13.12 and subsequent high HNF4A levels indicate this cluster corresponds well with CPTAC subtype E.CPS3 also contains lines that are most sensitive to the anti-epidermal growth factor receptor antibody cetuximab.The commonly observed colorectal stem-like subgroup is represented by subtypes CPS4 and CPS5.These cell lines have also been commonly associated with a less-differentiated state by other classifiers, and this is reinforced by our dataset; subtype CPS4 and CPS5 have low levels of HNF4A and CDX1 transcription factors and correlate well with CMS4 and CCS3.Cells in CPS4 and CPS5 subtypes commonly exhibit loss of the 9p21.3 region, including CDKN2A and CDKN2B, whereas this is rarely seen in other subtypes.Interestingly, whereas CPS5 displays activation of the Hippo signaling pathway, inflammatory/wounding response, and loss of 18q21.2, CPS4 has a mesenchymal profile, with low expression of CDH1 and JUP similarly to CPTAC subtype C and high Vimentin.Finally, we found common systematic patterns between the COREAD proteomic subtypes and the CPTAC colorectal cancer proteomic subtypes in a global scale using the cell line signature proteins.The overlap between the cell lines and the CPTAC colorectal tissue proteomic subtypes is summarized in Figure S6F.Lastly, we detected 206 differentially regulated proteins between the MSI-high and MSI-low cell lines, which were mainly converging to downregulation of DNA repair and chromosome organization as well as to upregulation of proteasome and Lsm2-8 complex.Whereas loss of DNA repair and organization functions are the underlying causes of MSI, the upregulation of RNA and protein degradation factors indicate the activation of a scavenging mechanism that regulates the abundance of mutated gene products.Although a number of recent studies have investigated the power of different combinations of molecular data to predict drug response in colorectal cancer cell lines, these have been limited to using genomic, transcriptomic, and methylation datasets.We have shown above that the DNA and gene expression variations are not directly consistent with the protein measurements.Also, it has been shown that there is a gain in predictive power for some phenotypic associations when also using protein abundance and phosphorylation changes.To date, there has not been a comprehensive analysis of the effect on the 
predictive power from the addition of proteomics datasets in colorectal cancer.All of the colorectal cell lines included in this study have been extensively characterized by sensitivity data for 265 compounds.These include clinical drugs, drugs currently in clinical development, and experimental compounds.We built Elastic Net models that use as input features genomic, methylation, gene expression, proteomics, and phosphoproteomics datasets.We were able to generate predictive models where the Pearson correlation between predicted and observed IC50 was greater than 0.4 in 81 of the 265 compounds.Response to most drugs was often specifically predicted by one data type, with very little overlap.The number of predictive models per drug target pathway and data type is depicted in Figure S7C, highlighting the contribution of proteomics and phosphoproteomics datasets in predicting response to certain drug classes.Within the proteomics-based signatures found to be predictive for drug response, we frequently observed the drug efflux transporters ABCB1 and ABCB11.In all models containing these proteins, elevated expression of the drug transporter was associated with drug resistance, in agreement with previous results.Notably, protein measurements of these transporters correlated more strongly with response to these drugs than the respective mRNA measurements.Interestingly, ABCB1 and ABCB11 are tightly co-regulated, suggesting a novel protein interaction.Classifying the cell lines into two groups with low and high mean protein abundance of ABCB1 and ABCB11 revealed a strong overlap with drug response for 54 compounds.Representative examples of these drug associations are shown in Figure 7C. To confirm the causal association between the protein abundance levels of ABCB1, ABCB11, and drug response, we performed viability assays in four cell lines treated with docetaxel, a chemotherapeutic agent broadly used in cancer treatment.The treatments were performed in the presence or absence of an ABCB1 inhibitor and confirmed that ABCB1 inhibition increases sensitivity to docetaxel in the cell lines with high ABCB1 and ABCB11 levels.Given the dominant effect of the drug efflux proteins in drug response, we next tested whether additional predictive models could be identified by correcting the drug response data for the mean protein abundance of ABCB1 and ABCB11 using linear regression.With this analysis, we were able to generate predictive models for 41 additional drugs from all input datasets combined.Taken together, our results show that the protein expression levels of drug efflux pumps play a key role in determining drug response, and whereas predictive genomic biomarkers may still be discovered, the importance of proteomic associations with drug response should not be underestimated.Our analysis of colorectal cancer cells using in-depth proteomics has yielded several significant insights into both fundamental molecular cell biology and the molecular heterogeneity of colorectal cancer subtypes.Beyond static measurements of protein abundances, the quality of our dataset enabled the construction of a reference proteomic co-variation map with topological features capturing the interplay between known protein complexes and biological processes in colorectal cancer cells.We show that the subunits of protein complexes tend to tightly maintain their total abundance ratios post-transcriptionally, and this is a fundamental feature of the co-variation network.The primary level of co-variation between proteins enables 
the generation of unique abundance profiles of known protein interactions, and the secondary level of co-regulation between protein complexes can indicate the formation of multi-complex protein assemblies.Moreover, the identification of proteins with outlier profiles from the conserved profile of their known interactors within a given complex can point to their pleiotropic roles in the associated processes.Notably, our approach can be used in combination with high-throughput pull-down assays for further refinement of large-scale protein interactomes based on co-variation signatures that appear to be pivotal for many protein interactions.Additionally, our approach can serve as a time-effective tool for the identification of tissue-specific co-variation profiles in cancer that may reflect tissue-specific associations.As a perspective, our data may be used in combination with genetic interaction screens to explore whether protein co-regulation can explain or predict synthetic lethality.Another novel aspect that emerged from our analysis is the maintenance of co-regulation at the level of net protein phosphorylation.This seems to be more pronounced in signaling pathways, where the protein abundances are insufficient to indicate functional associations.Analogous study of co-regulation between different types of protein modifications could also enable the identification of modification cross-talk.This framework also enabled the identification of upstream regulatory events that link transcription factors to their transcriptional targets at the protein level and partially explained the components of the co-variome that are not strictly shaped by physical protein interactions.To a smaller degree, the module-based analysis was predictive of DNA copy number variations, exposing paradigms of simple cause-and-effect proteogenomic features of the cell lines.Such associations should be carefully taken into consideration in large-scale correlation analyses, as they do not necessarily represent functional relationships.The simplification of the complex proteomic landscapes into co-variation modules enables a more direct alignment of genomic features with cellular functions and delineates how genomic alterations affect the proteome directly and indirectly.We show that LoF mutations can have a direct negative impact on protein abundances further to mRNA regulation.Targeted deletion of key chromatin modifiers by CRISPR/cas9 followed by proteomics and RNA-seq analysis confirmed that the effects of genomic alterations can propagate through physical protein interactions, highlighting the role of translational or post-translational mechanisms in modulating protein co-variation.Additionally, our analysis indicated that directionality can be another characteristic of such interactions.We provide evidence that colorectal cancer subtypes derived from tissue level gene expression and proteomics datasets are largely recapitulated in cell-based model systems at the proteome level, which further resolves the main subtypes into groups.This classification reflects a possible cell type of origin and the underlying differences in genomic alterations.This robust functional characterization of the COREAD cell lines can guide cell line selection in targeted cellular and biochemical experimental designs, where cell-line-specific biological features can have an impact on the results.Proteomic analysis highlighted that the expression of key protein components, such as ABC transporters, is critical in predicting drug response in 
colorectal cancer.Whereas further work is required to establish these as validated biomarkers of patient response in clinical trials, numerous studies have noted the role of these channels in aiding drug efflux.In summary, this study demonstrates the utility of proteomics in different aspects of systems biology and provides a valuable insight into the regulatory variation in colorectal cancer cells.Cell pellets were lysed by probe sonication/boiling, and protein extracts were subjected to trypsin digestion.The tryptic peptides were labeled with the TMT10plex reagents, combined at equal amounts, and fractionated with high-pH C18 high-performance liquid chromatography.Phosphopeptide enrichment was performed with immobilized metal ion affinity chromatography.LC-MS analysis was performed on the Dionex Ultimate 3000 system coupled with the Orbitrap Fusion Mass Spectrometer.MS3 level quantification with Synchronous Precursor Selection was used for total proteome measurements, whereas phosphopeptide measurements were obtained with a collision-induced dissociation-higher energy collisional dissociation method at the MS2 level.Raw mass spectrometry files were subjected to database search and quantification in Proteome Discoverer 1.4 or 2.1 using the SequestHT node followed by Percolator validation.Protein and phosphopeptide quantification was obtained by the sum of column-normalized TMT spectrum intensities followed by row-mean scaling.Enrichment for biological terms, pathways, and kinases was performed in Perseus 1.4 software with Fisher’s test or with the 1D-annotation-enrichment method.Known kinase-substrate associations were downloaded from the PhosphoSitePlus database.All terms were filtered for Benjamini-Hochberg FDR < 0.05 or FDR < 0.1.Correlation analyses were performed in RStudio with Benjamini-Hochberg multiple testing correction.ANOVA and Welch’s tests were performed in Perseus 1.4 software.Permutation-based FDR correction was applied to the ANOVA test p values for the assessment of the impact of mutations and copy number variations on protein and mRNA abundances.Volcano plots, boxplots, distribution plots, scatterplots, and bar plots were drawn in RStudio with the ggplot2 and ggrepel packages.All QTL associations were implemented by LIMIX using a linear regression test.Conceptualization, J.S.C. and U.M.; Methodology, T.I.R., S.P.W., and J.S.C.; Cell Lines, S.P. and S.P.W.; Mass Spectrometry, T.I.R. and L.Y.; Data Analysis, T.I.R., E.G., F.Z.G., S.P.W., J.C.W., M.P., P.B., J.S.-R., A.B., and O.S.; Cell Lines Classification, R.D. and J.G.; Drug Data Analysis, N.A., M.M., M.S., M.Y., J.S.-R., S.P.W., T.I.R., L.W., and U.M.; CRISPR Lines RNA-Seq, C.A., M.D.C.V.-H., and D.J.A.; Writing – Original Draft, T.I.R., S.P.W., L.W., U.M., and J.S.C.; Writing – Review and Editing, all.
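The quantification scheme described above (protein abundance obtained as the sum of column-normalized TMT reporter intensities followed by row-mean scaling) can be sketched as follows. This is a minimal illustration assuming a PSM-level table with one reporter-intensity column per TMT channel and a protein accession column; the file and column names are hypothetical, and this is not the Proteome Discoverer workflow itself.

```python
# Minimal sketch of sum-based TMT quantification: column normalization,
# summation per protein, then row-mean scaling. Names are hypothetical.
import pandas as pd

psm = pd.read_csv("psm_reporter_intensities.csv")  # rows = PSMs
channels = [c for c in psm.columns if c.startswith("tmt_")]

# 1) Column normalization: equalize total reporter signal per TMT channel
col_norm = psm[channels] / psm[channels].sum(axis=0)

# 2) Sum the normalized spectrum intensities per protein
prot = col_norm.join(psm["protein"]).groupby("protein").sum()

# 3) Row-mean scaling: express each protein relative to its mean across channels
prot_scaled = prot.div(prot.mean(axis=1), axis=0)
print(prot_scaled.head())
```

Row-mean scaling puts every protein on a relative scale across cell lines, which is what makes the downstream correlation and co-variation analyses comparable between proteins of very different absolute abundance.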
Assessing the impact of genomic alterations on protein networks is fundamental in identifying the mechanisms that shape cancer heterogeneity. We have used isobaric labeling to characterize the proteomic landscapes of 50 colorectal cancer cell lines and to decipher the functional consequences of somatic genomic variants. The robust quantification of over 9,000 proteins and 11,000 phosphopeptides on average enabled the de novo construction of a functional protein correlation network, which ultimately exposed the collateral effects of mutations on protein complexes. CRISPR-cas9 deletion of key chromatin modifiers confirmed that the consequences of genomic alterations can propagate through protein interactions in a transcript-independent manner. Lastly, we leveraged the quantified proteome to perform unsupervised classification of the cell lines and to build predictive models of drug response in colorectal cancer. Overall, we provide a deep integrative view of the functional network and the molecular structure underlying the heterogeneity of colorectal cancer cells.
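To make the co-variation analysis concrete: the study scores known CORUM complex co-membership from pairwise abundance correlations and compares protein- against mRNA-based profiles by ROC analysis. The sketch below is a simplified illustration of that idea, not the published code; the input files, the pre-computed list of CORUM gene pairs, and the brute-force pair loop (written for clarity, not speed) are all assumptions.

```python
# Minimal sketch: recover known CORUM pairs from pairwise Pearson correlations
# and compare protein vs mRNA profiles by ROC AUC. Inputs are hypothetical.
import itertools
import pandas as pd
from sklearn.metrics import roc_auc_score

protein = pd.read_csv("protein_abundance.csv", index_col=0)  # genes x cell lines
mrna = pd.read_csv("mrna_expression.csv", index_col=0)
corum_pairs = {
    tuple(sorted(p))
    for p in pd.read_csv("corum_pairs.csv")[["gene_a", "gene_b"]].itertuples(index=False)
}

genes = protein.index.intersection(mrna.index)

def pairwise_auc(mat: pd.DataFrame) -> float:
    """AUC for discriminating CORUM pairs from all other gene pairs."""
    corr = mat.loc[genes].T.corr()  # gene x gene Pearson correlations
    scores, labels = [], []
    for a, b in itertools.combinations(genes, 2):  # exhaustive loop, for clarity only
        scores.append(corr.at[a, b])
        labels.append(tuple(sorted((a, b))) in corum_pairs)
    return roc_auc_score(labels, scores)

print("protein-based AUC:", pairwise_auc(protein))
print("mRNA-based AUC:   ", pairwise_auc(mrna))
```

A higher AUC for the protein matrix than for the mRNA matrix would reproduce, in miniature, the paper's finding that protein co-variation is the better predictor of complex membership.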
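Similarly, the drug-response modeling described earlier (Elastic Net models relating molecular features to drug sensitivity, evaluated by the Pearson correlation between predicted and observed IC50) could be prototyped as below. The feature matrix, IC50 file, cross-validation scheme, and hyperparameter grid are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: Elastic Net prediction of log IC50 for one compound from a
# proteomics feature matrix, scored by out-of-fold Pearson correlation.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = pd.read_csv("proteomics_features.csv", index_col=0)       # cell lines x proteins
y = pd.read_csv("ic50.csv", index_col=0)["log_ic50"].loc[X.index]

model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=10000),
)

# Outer folds give out-of-sample predictions; ElasticNetCV tunes within each fold
pred = cross_val_predict(model, X.values, y.values,
                         cv=KFold(5, shuffle=True, random_state=0))
r, _ = pearsonr(y.values, pred)
print(f"predicted vs observed IC50: Pearson r = {r:.2f}")
```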
Behavioural markers for autism in infancy: Scores on the Autism Observational Scale for Infants in a prospective study of at-risk siblings
Younger siblings of children with an autism spectrum disorder represent a high-risk group for ASD, with recent estimates of the recurrence rate in siblings as high as 18.7%.This allows prospective study of development from the first few months of life in infants who will later go on to receive a diagnosis of ASD.Within such prospective infant sibling designs, identifying the earliest differences or markers in those who go on to develop autism is a research priority.Current aetiological models of autism propose that typical developmental trajectories are derailed by complex interactions between underlying genetic and neurological vulnerabilities, environment and behaviour, with cascading developmental effects.However, the details of these developmental processes are poorly understood.It is hoped that understanding the ordering and interactive influences of the earliest biological and behavioural perturbations will elucidate developmental mechanisms that lead to the pattern of symptoms and impairments that characterise the clinical phenotype, as well as protective mechanisms that differentiate those at familial risk who go on to have non-ASD outcomes.This may in turn point to targets for treatment as well as improving identification of those infants at highest risk for the disorder in infancy, allowing very early intervention to be put in place.Recent reviews of high-risk sibling studies find convergent evidence for the emergence of overt behavioural markers between 12 and 18 months of age that distinguish, at a group level, those infants who go on to receive an ASD diagnosis from other HR infants and low-risk groups.At this age, clinically relevant behavioural differences span both social communication; social referencing) and stereotyped/repetitive behaviour; repetitive movement domains, as well as motor and attentional atypicalities, appearing to represent early manifestations of later ASD symptoms.However, there is considerable heterogeneity at the individual level in the pattern of symptom emergence.Prior to 12 months of age, however, few overt behavioural markers for autism have been identified.In a recent report, Jones and Klin found that in a small sample of HR infants who went on to an ASD diagnosis, fixation on the eyes declined between 2 and 6 months of age.Other behavioural signs in the first year of life have included reduced gaze to people and vocal atypicalities.Experimental studies have detected atypical neural response to social stimuli such as dynamic eye gaze from as young as 6 months of age.Whilst the research reviewed above and elsewhere has mostly used experimental tasks/paradigms and observational and parent-report methods, there is also a clinical need for an instrument that allows systematic observation of early-emerging atypicalities in infants at-risk for ASD.The Autism Observational Scale for Infants is a semi-structured, experimenter-led behavioural assessment designed to measure early behavioural markers of ASD in infants aged between 6 and 18 months.These include atypicalities or delays in social communication behaviours and non-social behaviours as well as aspects of temperament."Preliminary findings from the instrument's authors’ HR sibling cohort suggested that AOSI scores by 12 months but not at 6 months were promising as a predictor of later ASD outcomes based on 24 month ADOS classification.The same group have reported that whilst many individual AOSI items across domains at both age 6 and 18 months differentiated the group of HR siblings who go on to 
have an ASD classification at 36 months and LR controls, only atypical motor behaviour differentiated HR siblings who go on to have ASD from HR siblings who do not at both timepoints.Early behavioural atypicalities measured by the AOSI at 12 months have also been shown to characterise nearly one fifth of HR siblings who do not go on to have ASD, consistent with the notion of sub-clinical manifestations of ASD being present at an enhanced rate in family members of individuals with ASD, referred to as the broader autism phenotype.The present study sought to replicate in an independent sample whether predictive associations exist between AOSI scores in early and later infancy and ASD outcome at 36 months.In the present study we analysed AOSI data from both 7 and 14 month timepoints in a cohort of HR siblings and LR controls subsequently followed up at 24 and 36 months to answer the following questions:Do scores on the AOSI differ between HR siblings and LR controls at 7 and 14 months?,Do scores on the AOSI differ between those HR siblings who go on to have a diagnosis of ASD from those HR siblings who do not?,Within the HR group are there associations between AOSI scores at the 7 and 14 month timepoint and later scores on the Autism Diagnostic Observational Schedule at 24 and 36 months?,Ethical approval for the BASIS study was obtained from NHS NRES London REC.One or both parents gave informed, written consent for their child to participate.One hundred and four children were recruited as part of the British Autism Study of Infant Siblings.They were seen on four visits when aged 6–10 months, 11–18 months, and then around their 2nd birthday, and third birthday.Each HR infant had an older sibling with a community clinical ASD diagnosis, confirmed on the basis of information in the Development and Wellbeing Assessment and the Social Communication Questionnaire by expert clinicians on our team.Most probands met ASD criteria on both measures.While a small number scored below threshold on the SCQ, no exclusions were made due to meeting the DAWBA threshold and expert opinion.For two probands, data were only available on one measure, and for four probands, neither measure was available.Parent-reported family medical histories were examined for significant conditions in the proband or extended family members with no such exclusions deemed necessary.LR controls were full-term infants recruited from a volunteer database at the Birkbeck Centre for Brain and Cognitive Development.Medical history review confirmed lack of ASD within first-degree relatives.All LR infants had at least one older sibling.The SCQ was used to confirm absence of ASD in these older siblings, with no child scoring above instrument cut-off.The Autism Observation Scale for Infants is an experimenter-led, semi-structured observational assessment, developed to study the nature and emergence of ASD-related behavioural markers in infancy.A standard set of objects and toys are used across five activities – each with a specified series of presses for a particular behaviour – and two periods of free-play.Responses to presses and observations made throughout the assessment are used to code nineteen items.Each item is coded on a scale from 0 to 2 or 0 to 3.A rating of 0 denotes typical behaviour and higher scores denote increasing atypicality.In the current study the 19 item version of the AOSI reported by Brian et al. 
was used.The AOSI yields a Total Score."The AOSI is administered by a trained examiner who sits at a table opposite the infant who is held on the parent's lap.AOSIs were administered by research-reliable research staff and the majority of administrations were double-coded by the examiner and an observer.Agreement between the two coders was excellent at both 7 months and 14 months."When codes differed between researchers, they discussed and agreed on a consensus code, where no observer codes were available the examiner's code was used.All participants were assessed at all visits on the Mullen Scales of Early Learning—a measure of developmental abilities yielding an Early Learning Composite standardised score.In order to explore the association between verbal and nonverbal developmental abilities and AOSI scores we calculated mean T-scores from the two Verbal and two Nonverbal Mullen subscales.Of the 54 HR infants recruited, 53 were retained to the 36 m visit when comprehensive diagnostic assessment was undertaken.At 36 m parents of HR siblings completed the Autism Diagnostic Interview—Revised and the SCQ, and both HR and LR toddlers were assessed with the Autism Diagnostic Observation Schedule and the revised Social Affect and Repetitive and Restrictive Behaviours subtotal and calibrated severity scores computed.Assessors were not blind to risk-group status.Assessments were conducted by or under the close supervision of clinical researchers with demonstrated research-level reliability.Different teams of researchers saw participants at the first two visits and the second two visits.Those assessing developmental outcomes were blind to infants’ performance on the AOSI.In determining diagnostic outcome status, four clinical researchers reviewed information across the 24 m and 36 m visit.Seventeen toddlers met ICD-10 criteria for an ASD).The remaining 36 toddlers did not meet diagnostic criteria for ASD.For the LR control group, in the absence of a full developmental history no formal clinical diagnoses were assigned but none had a community clinical ASD diagnosis at 36 months.It is worth noting that the recurrence rate reported in the current study is higher than that reported in the large consortium paper published by Ozonoff and colleagues.This is likely to reflect the modest size at-risk sample in the current study.Whilst recurrence rates approaching 30% have been found in other moderate size samples these rates are sample specific and will likely not be generalizable as findings from larger samples where autism recurrence rates converge between 10% and 20%.Similar procedures combining all information from standard diagnostic measures and clinical observation and arriving at a ‘clinical best estimate’ ICD-10 diagnosis was used in the present study in line with other familial at-risk studies and was conducted by an experienced group of clinical researchers.HR siblings and LR controls did not significantly differ from each other in age at any visit, nor did the HR outcome groups or LR controls differ from one another in age at any visit.Whilst the ELC scores of the HR siblings were in the average range, at each visit their scores were lower than those for the LR controls.The HR-ASD group had lower ELC scores than the LR group at all four visits and lower ELC scores compared to the HR-No ASD group at the 14 m and 36 m visits but not the 7 m and 24 m visits.Due to the skewed distribution of the AOSI Total Score a square root transformation was applied and the transformed data met assumptions of 
normality, with the exception of the LR group at 14 m of age. HR versus LR scores were compared using ANOVA, and HR-ASD, HR-No ASD and LR scores were compared using a one-way ANOVA and post-hoc Tukey HSD tests. Cohen's d effect sizes are reported. Following this, in order to control for verbal and nonverbal developmental level, the Mullen mean Verbal and Nonverbal T-scores were covaried and post-hoc least significant difference (LSD) tests conducted. For individual AOSI items, HR versus LR scores were compared using Mann–Whitney tests, and HR-ASD, HR-No ASD and LR scores were compared using Kruskal–Wallis tests, with significant differences followed up using post-hoc Mann–Whitney tests. Given the larger number of items, but also allowing for the exploratory nature of the analysis, a moderately conservative significance level of p < .01 was used. Correlations between AOSI scores and the total scores on the ADOS at 24 months and 36 months in the HR group only were examined using Pearson's product moment correlations. As shown in Table 2, at 7 m the HR group had a higher AOSI score than the LR group (F = 4.10, p = .045). At 14 m the HR group had a higher AOSI score than the LR group, but the difference was not statistically significant (F = 3.36, p = .07). However, when Mullen Verbal and Nonverbal T-scores at each age point were covaried, the difference between the HR and LR groups was no longer significant (7 m: F = .66, p = .37; 14 m: F = 2.24, p = .14). At 7 m but not 14 m the covariate effect for Mullen Verbal T-score was significant (F = 4.48, p < .05). Comparing AOSI scores for the HR-ASD and HR-No ASD outcome groups and the LR group, at 7 m the one-way ANOVA just missed significance (F = 2.94, p = .058). At 14 m, the three-outcome-group comparison was significant (F = 4.43, p = .014). Post-hoc Tukey tests showed that the HR-ASD group scored higher than the LR group. The HR-ASD group also had a marginally but not significantly higher AOSI score than the HR-No ASD group. These analyses were repeated covarying for Mullen Verbal and Nonverbal T-scores at each age. The HR outcome groups and LR group did not differ from each other at 7 m of age (F = .61, p = .47) but did differ from each other at 14 m of age (F = 3.85, p = .03). Post-hoc LSD tests showed that at 14 m the HR-ASD group scored higher than both the LR group and the HR-No ASD group, and that the HR-No ASD group and LR group did not differ from each other. At 7 m but not 14 m the covariate effect for Mullen Verbal T-score was significant (F = 4.31, p < .05). In terms of individual items, at 7 m the HR and LR groups did not differ on any items, but the HR group scored higher than the LR controls on one item at 14 m. In terms of HR-ASD vs. HR-No ASD outcome groups vs.
LR group differences, at 7 m there were significant differences for the following items: visual tracking and Social Referencing.At 14 m there were significant differences for the following items: orientation to name, Engagement of Attention and Social Referencing.To examine associations over time between behavioural atypicality as measured by the AOSI in infancy and the ADOS in toddlerhood correlations were examined in the HR group.Since the AOSI measures attentional disengagement and atypical motor behaviours, as well early social communication behaviours, AOSI scores were compared to the ADOS ‘total score’.AOSI score at 7 m was not associated with ADOS score at either 24 m or 36 m.However, AOSI score at 14 m was significantly associated both with ADOS score at 24 m and at 36 m.Consistent with previous reports behavioural atypicalities as measured by the AOSI differentiated the HR and LR groups in the first year and early in the second year of life.At 14 months these behavioural markers also discriminated between those HR infants who went on to an ASD diagnosis and LR controls and marginally between HR who did and did not go on to an ASD diagnosis.The HR versus LR group comparisons were attenuated when verbal and nonverbal developmental level was controlled but the HR-ASD outcome group differences remained significant.The HR-No ASD outcome group scored intermediate between the HR-ASD outcome group and the LR control group.Several aspects of this pattern of differences are worthy of comment.First, the HR versus LR group comparisons are modest only in effect size.However, in those HR infants who went on to meet diagnostic criteria for ASD at 36 months by the 14 month timepoint the effect sizes were large and the presence of atypical or unusual behaviours the effects of developmental level as measured by the Mullen verbal and nonverbal subscales did not predominate, with only verbal ability at 7 month being significantly associated with the AOSI total score.Finally, the HR-No ASD group performed intermediate between the HR-ASD group and LR group, both on overall total scores and on individual items.The boxplots in Fig. 1 show that the interquartile range of this group span across those of both the HR-ASD and LR groups, suggesting that characteristics that might be considered as aspect of the early ‘broader autism phenotype’ are seen in some but not all of the infants at familial high-risk who do not go onto an ASD presentation at 36 months of age.Although the initial report on the AOSI found that scores were predictive of a diagnosis at 12 months but not 6 months, subsequent studies have found that some behaviours at 6 months differentiate HR siblings and LR controls and also those HR siblings who go on to have an ASD diagnosis from both LR controls and HR siblings who do not go on to have an ASD.In terms of individual AOSI items we found no HR vs. 
LR group differences at 7 m and HR siblings only scored higher than LR controls on one item at 14 m.We found that social referencing at 7 m and orientation to name also at 14 m differentiated the HR siblings who went on to have an ASD from LR controls.Furthermore, in line with the BAP concept, HR siblings who did not go on to have an ASD also showed higher levels of atypical behaviour at 7 m and 14 m, compared to LR controls.However, these item-level findings should be considered exploratory given that a strict correction for multiple testing was not undertaken and require replication in other samples.In contrast to previous reports, we conservatively adjusted for the differences in verbal and non-verbal developmental ability between the groups.It is increasingly apparent, consistent with the phenotype of ASD, that both developmental and language delays are part of the BAP at a group level and, whilst covarying for these differences one might be taking out some of the variance of interest, early atypical behaviours still discriminated between the infants who went on to have ASD and those who did not.Within the HR sibling group we examined the association between early behavioural atypicalities as measured by the AOSI and later early symptoms of autism as measured by the ADOS.AOSI scores at 7 months were not associated with later ADOS scores but AOSI scores at 14 months were moderately associated with ADOS scores at 24 and 36 months.Although the two instruments are not identical there is a considerable overlap in the concepts, behaviours and scoring systems, and this suggests a moderate degree of continuity of autistic-like behavioural atypicality from the beginning of the second year of life into the toddler years.However, as has been found with many experimental measures, with a few exceptions, this continuity is not apparent from as early as 6 to 8 months of age.As such, it appears that the AOSI is successfully capturing very early emerging autistic behaviours.Larger samples will be required in order to test both how predictive such early behavioural markers are at an individual, as opposed to a group, level and in order to trace the longitudinal trajectory of the emergence of such behaviours over this early time course.This work is much needed as increasingly in some communities concerns about possible autism are raised about some children in the second year of life, in particular younger siblings of a child with an ASD given the now well-established recurrence rate of between 10% and 20%.We consider these findings preliminary due to the modest sample size and they will require confirmation in larger and other independent samples."However, it is the first independent report on the AOSI and replicates some of the findings from the instrument's originators.We also note several limitations in the design, including non-blind assessment at both the infancy and toddler assessments, although the team conducting the toddler visits were blind to infant AOSI scores.The current findings confirm the emerging picture that early behavioural atypicalities in emergent ASD include both social and non-social behaviours.Some of these atypicalities are found only in HR siblings who go on to have ASD but others are also found in HR siblings who do not, supporting the notion of an early broader autism phenotype.Understanding the interplay between different neurodevelopmental domains across the first years of life and the influences on these will be important both to understand the developmental mechanisms 
that lead to the ASD behavioural phenotype and to inform approaches to developing early interventions.
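The continuity claim above rests on simple bivariate associations between 14-month AOSI totals and later ADOS totals in the HR group. A minimal sketch of that check is shown below, again with hypothetical column names rather than the study's actual variables.

```python
# Illustrative sketch only: Pearson correlations between AOSI at 14 months and
# ADOS totals at 24 and 36 months, restricted to the HR group.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("infant_outcomes.csv")          # hypothetical data file
hr = df[df["risk_group"] == "HR"]

for ados_col in ["ados_total_24m", "ados_total_36m"]:
    paired = hr.dropna(subset=["aosi_total_14m", ados_col])
    r, p = pearsonr(paired["aosi_total_14m"], paired[ados_col])
    print(f"AOSI 14m vs {ados_col}: r = {r:.2f}, p = {p:.3f}, n = {len(paired)}")
```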
We investigated early behavioural markers of autism spectrum disorder (ASD) using the Autism Observational Scale for Infants (AOSI) in a prospective familial high-risk (HR) sample of infant siblings (N= 54) and low-risk (LR) controls (N= 50). The AOSI was completed at 7 and 14 month infant visits and children were seen again at age 24 and 36 months. Diagnostic outcome of ASD (HR-ASD) versus no ASD (HR-No ASD) was determined for the HR sample at the latter timepoint. The HR group scored higher than the LR group at 7 months and marginally but non-significantly higher than the LR group at 14 months, although these differences did not remain when verbal and nonverbal developmental level were covaried. The HR-ASD outcome group had higher AOSI scores than the LR group at 14 months but not 7 months, even when developmental level was taken into account. The HR-No ASD outcome group had scores intermediate between the HR-ASD and LR groups. At both timepoints a few individual items were higher in the HR-ASD and HR-No ASD outcome groups compared to the LR group and these included both social (e.g. orienting to name) and non-social (e.g. visual tracking) behaviours. AOSI scores at 14 months but not at 7 months were moderately correlated with later scores on the autism diagnostic observation schedule (ADOS) suggesting continuity of autistic-like behavioural atypicality but only from the second and not first year of life. The scores of HR siblings who did not go on to have ASD were intermediate between the HR-ASD outcome and LR groups, consistent with the notion of a broader autism phenotype.
Undiscovered porphyry copper resources in the Urals—A probabilistic mineral resource assessment
The Ural Mountains expose one of the world's best preserved Paleozoic metallogenic belts.The 2000-km-long mobile belt that separates the eastern European and western Siberian cratons hosts ore deposits formed in island-arc, continental-margin arc, and syn- and post-collisional geodynamic settings that record a complex history of subduction, accretion, arc-continent collisions, and ocean basin closures.Magmatism associated with these events produced world-class VMS deposits, magmatic chromite deposits, PGE-Fe-Ti deposits associated with mafic and ultramafic rocks, massive magnetite deposits, and porphyry copper deposits.Although the copper in the Urals region is mainly produced from volcanogenic massive sulfide deposits, recent discoveries suggest that it may also be an important porphyry copper province.In addition to copper, the porphyry deposits in the Urals are likely a significant source of molybdenum and gold.Paleozoic porphyry copper deposits and prospects occur throughout the eastern Urals in a series of north-south trending fault-bounded tectonic zones.These zones lie between the Main Uralian Fault on the west and the Transuralian zone and West Siberian Basin on the east.The potential for undiscovered resources associated with porphyry copper deposits in the Urals was assessed by outlining geographic areas as permissive tracts that may host undiscovered porphyry copper deposits on the basis of geology and the distribution of known deposits and prospects, estimating numbers of undiscovered deposits in those tracts, and simulating amounts of undiscovered resources.In addition, an economic filter was used to evaluate the portion of the undiscovered resources that are likely to be economically recoverable based on engineering cost models.The Urals study was done as part of a global copper mineral resource assessment.Identified resources have been reported for eight porphyry copper deposits in the Urals.Note that the identified resources for the smallest porphyry copper deposits listed in Table 1, Yubileinoe and Birgilda, were classified as porphyry copper prospects by Singer et al.; Tarutino is primarily a skarn deposit.Some of the tonnage and grade data meet current internationally accepted reporting standards; other data reported in the literature use Soviet-style resource and reserve classifications.For this assessment, the term “prospect” is used for the known and suspected porphyry copper occurrences that are only partially characterized by exploration, such as those that report grade information but no ore tonnages.Undiscovered resources may be present in known prospects or in completely new areas throughout the permissive tracts.The form of quantitative mineral resource assessment described by Singer and Menzie was adopted for the global mineral resource assessment of porphyry copper deposits.In this form of assessment, geographic areas are delineated using available data on geologic features typically associated with the type of deposit under consideration, as reported in descriptive mineral deposit models.The amount of metal contained in undiscovered deposits is estimated using grade and tonnage models combined with probabilistic estimates of numbers of undiscovered deposits.Estimates of numbers of undiscovered deposits are made by experts at different confidence levels using a variety of estimation strategies.The estimates express the degree of belief that some fixed but unknown number of deposits exists within the tract.These estimates are a measure of the favorability of 
the permissive tract as well as of the uncertainty about what may exist. The tectonic settings, distinctive lithologies, and other diagnostic characteristics of porphyry copper deposits are based on descriptive models. Permissive tracts were outlined as geographic areas to include permissive Paleozoic igneous geologic map units and known porphyry copper deposits and prospects within a tectonic megazone. Grade and tonnage models were then tested to decide if they are appropriate for use in the Urals: statistical tests comparing grades and tonnages of deposits within the Urals with grades and tonnages in the models of Singer et al. indicated that a global porphyry Cu-Au-Mo model based on 422 deposits is appropriate for the assessment of the Urals. Monte Carlo methods are used to combine estimates of numbers of undiscovered deposits with grade and tonnage models to produce a probabilistic estimate of in-situ undiscovered resources. The results are further analyzed by applying economic filters to evaluate what portion of the in-situ undiscovered resources might be economic based on specified mining methods, metal prices, deposit depth, and quality of existing infrastructure. Two different packages of geologic and mineral-resource digital data were used, one for the Urals and one for Central Asia. These data packages include 1:1 million and 1:1.5 million scale geologic maps, as well as thematic maps and mineral occurrence data. These maps were supplemented by selected geologic maps at various spatial scales. In addition, regional aeromagnetic data, published journal articles, books, symposia proceedings, government publications, and information obtained from various internet web sites were used. Previous topical studies on porphyry copper deposits of the Urals provided a framework for identifying magmatic arc complexes. Note that the assessment was done using information on deposits and prospects compiled as of 2014. See Table 1 of Plotinskaya et al. for a more comprehensive list of porphyry copper occurrences in the Urals. Aeromagnetic data extracted from the magnetic anomaly grid of the Former Soviet Union are shown as a reduced-to-pole map for the Urals in Fig. 2, along with some of the major tectonic zones and faults. These data helped define some of the permissive tract boundaries. International boundaries are from the U.S. Department of State. The Urals are the geographic expression of the complex tectonic boundary between the eastern European craton on the west, associated accreted terranes, and the composite Kazakh tectonic plate on the east. The Main Uralian Fault records the Late Devonian collision of the East European craton with accreted arc terranes to the east. VMS and porphyry copper deposits formed prior to collision. The arc terranes that host porphyry copper deposits in the Urals are preserved in several tectonic zones, referred to as volcanic arc megaterranes by Plotinskaya et al. (2017, this volume). From west to east, these include the northern part of the Central Uralian zone and the Tagil zone in the North, Cis-Polar, and Polar Urals, and the Magnitogorsk, East Uralian, and Transuralian zones in the Middle and South Urals. See Plotinskaya et al.
for a discussion of the geological framework, ages, and metallogeny of porphyry copper deposits of the Urals.Each of these tectonic zones includes Paleozoic magmatic arcs and metallogenic belts that host porphyry copper deposits.The Tagil zone extends from the middle Urals through the northern, Cis-Polar, and Polar Urals.The magmatic rocks of the Magnitogorsk zone are interpreted as the equivalent to the Tagil zone in the South Urals and a number of authors refer to the composite Tagil-Magnitogorsk megazone.The number, nature, and age range of magmatic arcs and arc fragments that now lie within the tectonic zones is not well-constrained because of the complex geologic history of the area.The oldest Paleozoic magmatic arc complexes recognized in the Urals are the Silurian to Middle Devonian arcs preserved in the Sakmara allochthon and Tagil zone.The Sakmara allochthon preserves Early Paleozoic sedimentary rocks that rifted off of the European craton to the west, mafic-ultramafic-complexes, and fragments of volcanic arc rocks that host chromite and VMS deposits west of the Main Uralian Fault.No porphyry copper deposits are known to be associated with the Sakmara zone.The Late Ordovician to Devonian Tagil arc within the Tagil zone is interpreted as an intra-oceanic arc that accreted to the Eastern European craton in the Early Carboniferous.In the Middle Urals, a Late Carboniferous to Permian strike-slip fault system reactivated the original Main Uralian suture and displaced the Tagil arc to the north.The Tagil zone is best known for a belt of the Silurian platinum-bearing zoned mafic-ultramafic complexes throughout the middle and northern Urals in the lower part of the Tagil arc.The arc evolved from tholeiitic to calc-alkaline to shoshonitic compositions, and includes both calc-alkaline and potassic igneous suites.Although many authors discuss the Tagil-Magnitogorsk tectonic zone as a single entity, we describe them separately for the purposes of this assessment because they are geographically distinct, accreted at different times, and exhibit different degrees of deformation and metamorphism.Furthermore, the poorly exposed, structurally dismembered Tagil arc is much less thoroughly studied than the Magnitogorsk arc and other parts of the southern Urals.During the Devonian, the Magnitogorsk oceanic arc was active above northeast-directed subducting ocean crust that separated it from the East European plate.By the end of the Early Carboniferous, both the Tagil and Magnitogorsk arcs were accreted to the East European craton.The suture between the East European craton and the Kazakh craton lies within the East Uralian zone.The Main Uralian fault zone in the southern Urals marks the likely continent-arc suture along which tectonic activity ceased by the Early Carboniferous.The East Uralian zone includes Late Devonian to Early Carboniferous calc-alkaline tonalite-granodiorite complexes and syn- and post-collisional Permian granites that form the main granite axis of the Urals.The post-collisional peraluminous Permian granites are not considered permissive for porphyry deposits.The deformed and metamorphosed fragments of oceanic island-arc rocks and Precambrian and Paleozoic continental rocks of the EUZ may represent an accretionary complex that accumulated in front of the Carboniferous Valerianovka arc, a continental arc on the western margin of the Kazakh craton that lies within the Transuralian zone.The Early to Middle Carboniferous Valerianovka arc on the Kazakh plate is an Andean-type 
continental-margin arc formed by subduction of Transuralian basin rocks along an east-dipping subduction zone as the Uralian Ocean was closing.The Troitsk fault zone is the boundary between the East Uralian zone and the Transuralian zone to the east.Outcrops of Valerianovka arc rocks are sparse due to extensive cover by Mesozoic and younger rocks, with the best exposures being found in the southeastern Ural Mountains in Kazakhstan.Permissive tracts for Phanerozoic porphyry copper deposits in the Urals were constructed in a GIS by selecting map units that contain permissive intrusive and extrusive rocks within each tectonic zone from digital geologic maps of the Urals.Permissive intrusive lithologies include gabbro, gabbrodiorite, quartz diorite, diorite, quartz monzonite, monzonite, granodiorite, plagiogranite, syenite, and porphyritic variants.Porphyry copper deposits typically are not associated with gabbroic rocks.However, in the Urals, gabbros are associated with permissive diorite and plagiogranite complexes as well as with non-permissive ultramafic complexes.Stratified map units that include extrusive rocks of intermediate composition, such as andesite and dacite, and stratified formations that are described in the literature as host rocks for known porphyry copper deposits in the region were also selected.Phanerozoic granitoids generally decrease in age from west to east across the Urals in the north based on ages assigned to map units.In the southern Urals, where Carboniferous granitoids lie to the east of the older rocks, the eastward younging trend is punctuated by Permian granites that define the Main Granite Axis of the Urals and older rocks that may represent allochthonous island-arc fragments.Recent radiometric dating studies on porphyry deposits and associated plutons show that some deposits that were originally considered to be Devonian in age are proving to be Silurian.For example, Grabezhev et al. 
acquired a SHRIMP U-Pb zircon age of 428 ± 3 Ma for diorite at the Tomino porphyry copper deposit and an age of 427 ± 6 Ma for an epithermal Au-Ag deposit in the same volcanoplutonic complex.Most of the other porphyry systems that have been dated by this method have proven to be Devonian, such as the 381 ± 5 Ma Voznesenskoe porphyry copper and the 374 ± 3 Ma Yubileinoe Au porphyry.Tract boundaries are based primarily on the megazone boundaries of Puchkov.The eastern boundaries of tracts that border the West Siberian Basin are subjective.Permissive rocks may extend an unknown distance to the east under post-Paleozoic cover.Western tract boundaries are defined by megazone-bounding faults.The tracts contain “holes” that represent non-permissive rocks that were excluded from the tracts.The holes represent Precambrian basement rocks, ultramafic rocks, and deep-seated Permian granitoids, such as the Dzhabyk batholith.These deep-seated, anatectic granitoids are unlikely to host porphyry copper deposits, which typically form at shallow crustal levels.Four geographic areas are delineated as permissive tracts for Paleozoic porphyry copper deposits east of the Main Uralian Fault in the Urals.Each tectonic zone, and therefore each permissive tract, may contain one or more magmatic arcs or arc fragments.Tract areas were calculated in a GIS using an equal area projection.Table 2 lists each tract area, along with the approximate percentage of each tract that is occupied by permissive rock, Precambrian basement, and post-Early Carboniferous sedimentary cover rocks based on analysis of the geologic map of Petrov et al.Identified resources for porphyry copper deposits in the Urals, i.e., those deposits that have well-defined tonnages and ore grades, are summarized in Table 1.Prospects associated with each permissive tract are listed in Table 3, and briefly described below.In both tables, deposits and prospects are listed from north to south within each tract.See Plotinskaya et al. for more information on individual porphyry copper occurrences and ages.Most of the Polar Urals area is occupied by Precambrian complexes associated with the Neoproterozoic Timanide stage of development of the Urals.Examples of porphyry copper deposits and prospects are rare in the Cis- and Polar Urals.No permissive tract was drawn for the areas west of the Main Uralian Fault in the Central Urals zone because of uncertainties about exact location, age, and nature of the only porphyry copper deposit with reported resources, which is Lekyn-Talbei.The approximate location of the deposit is west of the Paleozoic Tagil-Polar tract.Lekyn-Talbei has been described as a poorly explored porphyry copper deposit formed in an island-arc setting similar to the setting for deposits further south and as a porphyry molybdenum-copper occurrence in exposed northern parts of the Central Uralian zone.The deposit is assigned an age range of 362 to 207 Ma based on K-Ar dating of sericite.However, Plotinskaya et al. suggest a more likely Vendian age based on geology, pending further investigations.The deposit consists of lens-shaped lodes, veins, and disseminated mineralization associated with the,Lekyn-Talbey volcanic complex.Reported resources include 251.7 kt of copper and 4.2 kt of molybdenum in the C1 category along with P1 resources of 830 kt copper and 13.8 kt of molybdenum.Plotinskaya et al. 
cited inferred resources of 85.6 Mt of ore, 0.46 Mt of copper, 7.6 kt of molybdenum, 12 t of gold and 100 t of silver.The Tagil-Polar Urals tract includes two segments.The northernmost Polar Urals segment outlines Phanerozoic igneous rocks in the Polar region east of the Main Uralian Fault that are associated with a magnetic high area that lies just south of an arm of the Arctic Ocean.In some studies, this area is included as a northern extension of volcanic sequences associated with the Tagil arc.Intrusive rocks mapped in the tract area include Silurian to Middle Devonian diorite, quartz diorite, and granodiorite.The larger segment of the Tagil-Polar Urals tract includes Ordovician through Devonian intrusive and stratified rocks that lie east of the Main Uralian Fault in an area extending from the Middle Urals and extending through the North and Cis-Polar Urals.The eastern boundary of the tract is based on an approximation of the extent of permissive rocks under shallow cover below the West Siberian Basin.Permissive intrusive rocks include alkaline gabbro, diorite, gabbro, gabbrodiorite, granodiorite, plagiogranite, quartz diorite, syenite, and syenodiorite.Stratified rocks that mention andesite, basaltic andesite, basalt, and trachyte in lithologic descriptions are included as permissive extrusive rocks.Ultramafic complexes are excluded.Silurian andesitic rocks are overlain by Lower Devonian trachytes and volcaniclastic rocks to the east, which are in turn, overlain by 2 km of Devonian limestone and locally intercalated calc-alkaline volcanic rocks.The calc-alkaline magmatism in the eastern part of the Tagil arc continued into the Late Devonian, overlapping in time with the Magnitogorsk magmatism to the south.According to Puchkov, Tagil arc magmatism ceased in the Early Devonian, but the collision with the European continent did not occur until Late Devonian or Early Carboniferous time.Therefore, Late Devonian-Early Carboniferous igneous rocks within the Tagil zone may be products of syn- to post-collisional magmatism.These younger rocks include lithologies that are permissive for porphyry copper deposits.Prospects are listed in Table 3 and plotted on Fig. 
5.Novogodnee-Monto is an oxidized Au-Cu magnetite skarn and porphyry prospect area in a complex geologic setting in the Polar Urals.Both an Early Devonian gabbroic to dioritic calc-alkaline suite and a Late Devonian-Early Carboniferous monzonite porphyry associated with a potassic suite are preserved.The older rocks appear to indicate an island-arc setting.Gold and copper mineralization overprints earlier magnetite skarn and forms porphyry-style stockworks that are associated with the younger, post-subduction magmatism.The skarns are cut by monzonitic rocks of the younger potassic suite, which are pervasively altered and chalcopyrite-bearing.Yanaslor is classified as a Cu-Mo-Au porphyry copper occurrence.Reported grades of 0.3–0.4% Cu and 0.002% Mo are characteristic of porphyry copper deposits; no tonnage data are available.Gumeshevskoe is a skarn-porphyry system associated with Middle Devonian quartz diorite and porphyry diorite dikes.Ore mineral assemblages include chalcopyrite, magnetite, bornite, and pyrrhotite, with ore Cu/Mo ratios ranging from 600 to 1700."The Krasnotur'insk copper skarn ore field contains magnetite skarns associated with quartz diorite, diorite, and gabbrodiorite; small diorite plutons host disseminated porphyry copper-style mineralization.U/Pb zircon ages for quartz diorite of 404 Ma suggest an Early Devonian age for the porphyry mineralization.The deposit type for the many other copper occurrences in the Tagil-Polar Urals tract is unknown; many of these occurrences may represent VMS-style mineralization.The tract includes > 40 iron skarns, which may or may not be indicators of a porphyry environment.The Magnitogorsk arc in the Magnitogorsk tectonic zone is the best exposed volcanic arc segment in the Urals and defines the Magnitogorsk tract.The development of the Middle to Late Devonian Magnitogorsk intra-oceanic volcanic arc in the southern Urals was synchronous with the break-up of the Tagil arc to the north.The permissive tract is bounded by the Main Uralian Fault on the west and the Magnitogorsk fault zone on the east, and was delineated primarily on the basis of maps shown in Herrington et al. and Puchkov.The tract includes Middle through Late Devonian gabbro, diorite, granodiorite, plagiogranite, gabbrodiorite, quartz diorite, alkaline gabbro, syenodiorite, syenite, minor rhyodacite, and rhyolite.Stratified units include tholeiitic and calc-alkaline rocks of the Irendyk Formation in the west and Karamalytash Formation in the east.Ultramafic complexes are excluded.Yubileinoe is an undeveloped, reportedly economic Cu-Au porphyry system associated with a small plagiogranite porphyry stock in the southern part of the tract.The deposit was discovered in 1961 and has been drilled to a depth of 600 m. Mineralization at Yubileinoe formed as skarn and also in potassic and phyllic alteration zones; stockworks are commonly localized along intersecting fault zones.Reported resources include 10 Mt of ore at an average grade of 0.41% Cu and 6.6 g/t Au.Gold grades range from 3 to 11 g/t Au, and 65 g/t Ag.Shatov et al. 
reported that the deposit has produced 0.832 Moz of gold at 6.5 g/t Au and noted that Sun Gold estimated resources at 82.8 Mt of ore at 1.7 g/t Au and 0.15% Cu. The deposit includes several ore bodies: an 80–240 m long and 9–12 m thick western ore body along an intrusive contact; a northern, volcanic-hosted ore body; a southeastern ore body along contacts of a plagiogranite porphyry stock; and a central ore body within the stock. A U-Pb zircon age of 374 ± 3 Ma from altered porphyry in drill core at Yubileinoe, along with ages and geochemical analyses of ore-bearing granitoids in the Magnitogorsk and Tagil tectonic zones, suggests that porphyry deposits in the region may have formed over a protracted period of time from the Middle Devonian to the Early Carboniferous. Older island-arc related Cu-Au deposits were followed by more Mo-rich deposits in the Early Carboniferous. Grabezhev showed that the geochemistry of granitoids associated with porphyry copper systems in the Magnitogorsk and Tagil zones indicates an increasing crustal component in the source material as the arcs evolved, as reflected by increasing SiO2, K2O, Rb, and REE contents and corresponding shifts in initial isotopic ratios. Salavat is an undeveloped Devonian porphyry Cu-Au prospect in calc-alkalic basaltic andesite of the Irendyk Formation associated with co-magmatic granodiorite and diorite stocks and dikes. It is described as a medium-size deposit with average grades of 0.5% Cu and 0.003% Mo; 0.01 to 0.05 ppm Au is reported in pyrite. Ore contains < 0.01 to 0.47 ppm Re. A resource has not been delineated at Salavat. Other prospects in the tract include the Late Devonian Voznesensk Cu-Mo-Au porphyry in altered quartz diorite and a small prospect reported at Dunguray. The tract hosts a variety of significant VMS deposits that have produced most of the copper in the region, as well as magnetite skarns that fueled iron and steel production in Magnitogorsk in the early 1900s. A few small to medium size volcanic-associated epithermal gold deposits are present in the northern part of the tract. The East Uralian Zone represents the suture between the East European craton and the Kazakh craton. The EUZ is an intensely deformed and metamorphosed belt of rocks derived from both the Eurasian and Kazakh cratons and the intervening ocean. Two stages of magmatism are recognized within the EUZ: an early stage of Late Devonian to Early Carboniferous calc-alkalic intermediate-composition intrusions, and a later stage of Late Carboniferous and Permian granitoid batholiths, along with diorite and gabbro intrusions. Late Carboniferous and Permian magmatism produced voluminous 275–290 Ma granite batholiths, including two-mica granites and associated Be- and Ta-pegmatites. The known porphyry copper deposits in the East Uralian tectonic zone are associated with the older Late Devonian-Early Carboniferous intermediate-composition calc-alkalic rocks. These permissive Late Devonian and Early Carboniferous lithologies were used to define the East Uralian tract, which also includes older Ordovician through Middle Devonian rocks. The “holes” in the tract represent Permian granites that are not permissive for porphyry copper deposits. The western tract boundary is the approximate location of the East Magnitogorsk and Serov-Mauk fault zones. The eastern boundary is the approximate location of the Troitsk fault in the southern Urals; in the north, the boundary with the Transuralian zone to the east is covered by the West Siberian Basin. The East Uralian tract hosts the Birgilda-Tomino ore cluster, which includes the large Tomino and the
smaller Birgilda deposits.These porphyry copper deposits are associated with a Silurian igneous complex of hydrothermally altered calc-alkaline diorite porphyry stocks, subvolcanic andesite porphyries, and extrusive rocks.The ore cluster includes porphyry, epithermal, and skarn deposits in a series of five mineralized zones along a 40 km by 20–25 km north-trending zone.The Tomino, North Tomino, and Kalinovskoe systems occur within a 10 km-long by 5–6 km wide zone.The Birgilda deposit is spatially separated from the shallower Tomino area by the Michurino zone, which hosts the Bereznyakovskoe Au-Ag epithermal deposits and by the Yaguzak zone.Uneconomic Cu-Mo-Au prospects that may be associated with younger Carboniferous monzogranodiorite porphyries are present in the Yaguzak zone.Alapaevesk and Artemovsk are described as small porphyry copper prospects associated with diorite-quartz diorite-plagiogranite plutons in the 100-km-long Alapaevsk-Sukhoi Log porphyry copper zone in the Middle Urals.Several porphyry copper prospects occur within the tract, as well as the 299 Ma Talitsa porphyry molybdenum deposit.Talitsa was considered an incompletely explored prospect at the time of our assessment; however, an ore tonnage of 129 Mt at 0.055% Mo and 0.11% Cu is cited by Plotinskaya et al.The Transuralian tract includes remnants of at least three Devonian and Carboniferous calc-alkaline volcano-plutonic complexes within the Transuralian tectonic zone: the Irgizskaya and Alexandrovskaya arcs on the west and the Valerianovka arc to the east.Most of the tract is in western Kazakhstan and covered by post-Carboniferous sedimentary rocks.The western boundary of the Transuralian tract is the boundary between the Transuralian zone and the East Uralian zone along the Troitsk fault.The eastern boundary is partly along the Anapov fault as shown by Hawkins et al. and the boundary identified Herrington et al. as the approximate boundary of Mesozoic cover under the West Siberian basin.Our Transuralian tract partly overlaps the larger tract previously described by Berger et al. as the Valerianovka arc in a porphyry copper assessment of western Central Asia.However, we did not incorporate those results in the current assessment of the Transuralian tract.Seltmann et al. show porphyry copper occurrences in the Transuralian zone, but descriptive information is not available for most of them.Zhukov et al. 
describe three of the occurrences—Bataly, Benkala North, and Spiridonovskoe.Varvarinskoe, listed as a porphyry copper deposit by Singer et al., is classified as a gold-copper skarn deposit by Zhukov et al.Vavarinskoe is associated with the Alexandrovskaya arc within the Transuralian zone.Taranovskoe is described as a medium-size porphyry copper deposit.Tarutino is a skarn and porphyry Cu-Mo prospect in the western part of the tract.Ore was deposited in two stages: an early stage skarn and chalcopyrite-pyrite mineralization in quartz diorite and porphyritic diorite, and a later stage of molybdenite mineralization in granodiorite.The deposit was discovered in 1995.In 2006, Eureka Mining evaluated the deposit and concluded that it was uneconomic.Various estimates of total resources have been reported.The high copper grade reported suggests that the resources apply mostly to skarn portions of the deposit.Results from a 2013–2014 drilling campaign were used to estimate JORC-compliant total measured, indicated, and inferred resources of 10 Mt or ore with an average grade of 0.99% copper.Previous estimates cited much larger ore tonnages, similar copper grades, and grades for gold, silver, molybdenum, and iron.Mikheevskoe, the only porphyry copper deposit that has been developed in the Urals in Russia, is the largest deposit in the study area with over 2 million metric tons of contained copper.The deposit is associated with Late Devonian to Carboniferous quartz diorite, diorite, granodiorite, and subvolcanic intrusions within an ore field that includes other porphyry occurrences.It was discovered in 1997 and mining began in 2013.Ore is concentrated in a 0.5 by 3 km area between two large diorite stocks.Chalcopyrite is the main copper ore mineral, along with bornite.Alteration assemblages include potassic, Ca-sodic, phyllic, and propylitic.A detailed study of the rhenium distributions in ore and molybdenite showed the ore grades typically are < 0.5 g/t Re and that the Re grade is correlated with Mo grade.Benkala, also known as Benkala North is associated with Early Carboniferous tuffaceous sands and silts and volcanic rocks intruded by Early to middle Carboniferous porphyritic quartz diorites and granodiorites.Zhukov et al. show the Benkala mineralization as oval in plan with the long axis over 1 km long and the short axis 500–700 m wide.A thin supergene blanket overlies the deposit.Benkala was discovered in 1968 and drilled during Soviet exploration from 1976 to 1979.Open pit mining of near-surface oxide ore began in 2012.Production of cathode copper by SX-EW increased from 792 t of copper to 1702 t in 2013.JORC-compliant 2011 measured, indicated, and inferred oxide and sulfide resources were reported as 361,916,000 metric tons of ore at an average grade of 0.41% Cu based on a cutoff grade of 0.25% Cu.Reserve estimates based on Soviet era drilling indicated average grades of 0.008% Mo and 0.17 g/t silver in primary ore.The Benkala South prospect, located 10 km from the Benkala project, has preliminary resource estimates of 95,000 metric tons of oxide copper and 515,000 metric tons of sulfide copper.The data are based on 1979 Soviet estimates at a copper cutoff grade of 0.25%.Plans for both projects include mining oxide ores to a depth of 100 to 150 m followed by mining primary sulfide ore.Bataly is a copper-molybdenum prospect in a complex of granodiorite and granodiorite porphyry intrusions north of Benkala.Zhukov et al. 
describe a paleovolcanic structure of two neck-like zones of intrusive rock that coalesce at depth into a single, larger composite intrusion of regional dimensions.Zhukov et al. note that the deposit is not completely drilled at depth and warrants further exploration.Varvarinskoe, a few kilometers north-northeast of the Bataly, was classified as a porphyry copper-gold deposit.Varvarinskoe has been classified in various ways by different investigators; Dodd et al. classified it is as a skarn.The primary ore occurs as stratiform massive and disseminated sulfides, as alterations of garnet-pyroxene skarn in calcareous volcanic rocks, marbleized limestone, and volcanic breccias.A second ore type consists of vein and disseminated sulfides and stockworks at the contacts of porphyritic diorite and serpentinite intrusions and tectonic breccias.The deposit is oxidized at the surface and some supergene mineralization occurs.Dodd et al. note that there is additional reserve potential at depth beyond what has already been delineated, but a possible relationship to an underlying granodiorite stock is apparently wholly speculative.The deposit was discovered in 1981 and developed as an open-pit mine by European Minerals Corporation, and subsequently by Orsu Metals.Mining started in 2006.The mine was acquired by Polymetal International plc in 2009 and is in production as the Varvara Mine with mine life projected to 2030.The deposit produced 6900 t of copper in 2011.Spiridonovskoe is a porphyry Cu-Mo prospect located between Benkala and Bataly.The age of the deposit is equivocal.Zhukov et al. associate it with tuffs and diorite to granodiorite intrusive rocks of a Late Silurian to Early Devonian massif.Based on geology, Plotinskaya et al. assign a Late Carboniferous age and report grades of 0.55% copper, with Mo and Au.Magnetite-copper skarn deposits are associated with the intrusive complexes that host many of the porphyry copper occurrences in the tract area, such as Benkala.Some iron-copper skarn deposits may be considered as a favorable indication for the possibility of an undiscovered porphyry-style deposit in the same intrusive complex in which the skarns occur.However, the classification of the huge magnetite deposits in the Transuralian zone as simple iron skarns or as porphyry-related is problematic.These deposits have extensive scapolite alteration, are described as distal to associated mafic intrusions, and may be better described as analogs to the Kiruna-type magnetite deposits of Sweden or as variants of IOCG deposits.Copper, gold, and silver are reported at Kachar; most of the other skarns only contain iron.All of the known VMS deposits occur to the west of the tract.Therefore, the numerous copper occurrences shown on Fig. 
8 may be porphyry-related whereas many copper occurrences in other tracts are as likely to be associated with VMS deposits as with porphyry–skarn systems, pending further investigation.The coefficient of variation is often reported as percent relative variation.Thus, the final team estimates reflect both the favorability and the uncertainty of what may exist in the tract.The following sections describe the rationale for estimates of numbers of undiscovered porphyry copper deposits for each permissive tract along with some of the data considered in arriving at the estimates and the estimates.The Tagil arc extends from the Middle Urals to the Polar Urals region.The area is remote and poorly studied relative to the southern Urals.Permissive igneous intrusions and stratified map units that include volcanic lithologies that could be associated with porphyry copper deposits crop out over about half of the tract area.Although porphyry copper resources are not yet thoroughly delineated in the tract, three porphyry-skarn prospects, eight copper skarn occurrences, and 63 occurrences where copper is reported as a major commodity are present.In addition, Plotinskaya et al. mention 3 additional porphyry-related deposits in the North to Middle Urals in the Tagil-Magnitogorsk megaterrane.If fully explored, identified resources may become available for these and other prospects, all of which presently are indicative of undiscovered copper resources in the tract.Copper occurrences may or may not be indicative of a porphyry system; volcanogenic massive sulfide deposits throughout the Urals are also copper-rich.Prognostic studies were done by Urals geologists and by the Central Research Geological Exploration Institute of Nonferrous and Precious Metals in Moscow for many years with a focus on VMS deposits, and exploration for porphyry copper deposits was done during the years 1974 to 1984.Some parts of the tract include coeval intrusive and extrusive permissive rocks suggesting an appropriate depth of porphyry preservation.Other parts of the tract are primarily intrusive and may be too deeply eroded to preserve many porphyry deposits.Estimates of a 90% chance of 1 or more deposits, a 50% chance of 3 or more deposits, and a 10% chance of 8 or more deposits resulted in a mean of 4.4 undiscovered deposits for the 49,600 km2 tract area.The relatively high coefficient of variation associated with the probability distribution for undiscovered deposits within the tract reflects a relatively high degree of uncertainty for this little-known area, much of which consists of undifferentiated early Paleozoic rocks.The Magnitogorsk tract has been extensively explored studied for VMS deposits.Although both VMS and magnetite skarn deposits have been the main exploration focus in the area, a number of small porphyry copper prospects are known but have not been developed.Resources of 41,000 metric tons of contained copper are reported for the Yubileinoe deposit.Almost 40% of the tract is covered by rocks that are younger than Late Devonian.About a third of the tract area is occupied by permissive igneous rocks exposed at the surface, and more than half of those are stratified map units that include volcanic rocks.The presence of coeval intrusive and extrusive rocks suggests that the level of exposure preserved within the tract area is optimal for preservation of porphyry copper deposits.The tract hosts 64 copper occurrences.Most of these occurrences only report Cu; two are described as Cu-Au and two as Cu-Mo.Most of the 
skarns in the tract area are magnetite skarns; one is a W-Cu-Mo skarn.The team concluded that Salavat is an incompletely explored prospect that upon further investigation is likely to become a viable deposit.In addition, Yubileinoe may not be fully delineated.With an area of 49,320 km2, the Magnitogorsk tract is the about the same size as the Tagil-Polar Urals tract to the north.The team estimated a 90% chance of 2 or more undiscovered deposits, a 50% chance of 4 or more deposits and a 10% chance of 6 or more deposits.The relatively low coefficient of variation associated with the probability distribution for undiscovered deposits within the tract and the relatively high deposit density of about 10 deposits 100,000 km2 indicate that the team considered the tract favorable for the occurrence of undiscovered porphyry copper deposits.The East Uralian tract hosts the Birgilda and Tomino porphyry copper deposits and the largest number of important porphyry copper prospects.The tract is more thoroughly explored than the tracts to the north.About a third of the tract area is occupied by permissive igneous rocks exposed at the surface and the ratio of permissive intrusive rocks to extrusive rocks is about 3 to 1, suggesting that some parts are more deeply eroded than others.About 20% of the tract is characterized by exposed Precambrian rocks and sedimentary rocks younger then Early Carboniferous cover 35% of the tract area.The tract hosts two copper skarns and an additional 55 copper occurrences that are listed in the database of Petrov et al.Of these, 11 are Cu-Mo occurrences and two others list Cu and Au.The recognition of large zones of porphyry-type mineralization within the tract suggests that additional deposits are likely to exist within the tract.The team estimated a 90% chance of 3 or more undiscovered deposits, a 50% chance of 5 or more deposits and a 10% chance of 10 or more deposits for a mean of 6.5 undiscovered porphyry copper deposits."Estimates at the lower percentiles reflect the team's conclusion that some proportion of the skarns and many copper occurrences within the tract could be associated with porphyry copper systems, and that some systems may exist in completely unexplored areas.A relatively low coefficient of variation is associated with the probability distribution for undiscovered deposits because of the extensive exploration in this tract.Porphyry copper occurrences and potentially linked deposit types in the Transuralian tract are possible indicators of undiscovered deposits.The depth of erosion of the terrane east of the EUZ was not excessive, and because of low topographic relief, poor bedrock exposures, and remoteness of much of the area, there were many prospective areas left unexplored despite some Soviet-era investment in the region.The consensus was that less than a tenth of the delineated tract was of the correct age and/or the correct igneous rock types to host porphyry copper deposits.However, some of the known deposits are completely covered and younger sediments cover > 70% of the tract area.In addition to the 4 known deposits, the tract hosts 4 porphyry copper prospects, 2 copper skarns, and 49 other copper occurrences.The team estimated a mean of 6.1 undiscovered deposits, based on a 90% chance of 2 or more deposits, a 50% chance of 4 deposits, a 10% chance of 10 deposits, and lower probabilities of as many as 18 or 32 deposits.The high coefficient of variation, associated with the probability distribution for undiscovered deposit within the tracts 
reflects a relatively high degree of uncertainty.Consensus estimates for each permissive tract were combined with the global porphyry copper grade and tonnage model of Singer et al. in a Monte Carlo simulation using the EMINERS computer program.The resulting probabilistic estimates of amounts of in-place resources that could be associated with undiscovered deposits within each tract are listed in Table 6.Monte Carlo simulation results for each commodity in each tract are shown graphically in Fig. 9, using a log scale for the amount of each material.The probability distributions display a range of possible amounts of undiscovered resources for each commodity and for bulk ore tonnage.Note that there is some probability of no resources for all commodities and all tracts.For example, the probability of no undiscovered copper for the Tagil tract is 0.06.In addition, the mean is greater than the median in all cases.A mean of 17 million metric tons of copper is estimated for both the Tagil and Magnitogorsk tracts.The Tagil tract is about the same size as the Magnitogorsk tract, each of the tracts contains three known porphyry copper prospects, and the amount of exposed permissive rock is similar.The higher coefficient of variance associated with the estimates for the Tagil-Polar tract reflects a higher degree of uncertainty although the mean number of undiscovered deposits is about the same.The mean amounts of copper in undiscovered deposits in both the East Uralian and Transuralian tracts are higher, at 24 Mt in each.In addition to copper, the simulation results imply that significant amounts of molybdenum, gold, and silver may also occur.The ratio of mean undiscovered copper resources to identified porphyry copper resources indicates that about 14 times more copper in Paleozoic porphyry copper deposits than is presently known may exist in the Urals.Additional data on identified resources that were not available at the time of the assessment would, of course, change these ratios.Simplified engineering cost models, updated with a cost index, were used to estimate the economic fraction of resources contained in undiscovered porphyry copper deposits as described by Robinson and Menzie.The economic filters were computed using an Excel workbook developed by Robinson and Menzie.The 20-year average metal prices and metallurgical recovery rates were used in the filter calculations, along with the specified depth percentage and cost settings.The depth distribution affects the amount of material that would have to be moved to develop a mine.Some deposits are exposed at the surface.However, many are not and the amount of cover material that must be removed to access the ore adds to the cost of developing a mine.These costs can make the difference between an economic and an uneconomic deposit.For each tract, a subjective estimate of a hypothetical depth distribution of undiscovered deposits was made."These estimates refer to the part of the upper kilometer of the earth's crust that the tops of any undiscovered porphyry copper deposits are expected to lie within.As a default, 25% of the undiscovered deposits are accessible in the upper 250 m of the crust, 25% are accessible between 250 and 500 m, and 50% lie below 500 m but above 1 km.For areas that have significant amounts of volcanic rocks or other cover such as the Transuralian tract, it was assumed that a greater percentage of the undiscovered deposits would lie at deeper depths, so the distribution was skewed to: 10% of the undiscovered deposits in the 
upper 250 m of the crust, 30% between 250 and 500 m, and 60% below 500 m but above 1 km.The economic viability of a deposit also depends on the availability of existing infrastructure for developing a mine.As an independent guide to selecting appropriate cost settings for mining for each tract, we considered the infrastructure rankings compiled by the Fraser Institute for Russia and applied a ranking of “typical cost” setting to tracts that have existing regional infrastructure to support mining.For the Tagil tract, which includes very remote northern regions, a “high cost” setting was used.The Transuralian tract, where much of the permissive rock lies under cover, was also assumed to be a “high cost” setting.The other two tracts were considered a mix of “typical” and “high cost” settings.Application of the filter to the assessment results shows that about half of the mean amount of undiscovered copper could be economic based on the assumptions used in the modeling.Approximately 30 to 50% of the mean amounts of molybdenum, gold, and silver pass the filter as potentially economic commodities.The porphyry copper deposits of the Urals occur in both Russia and Kazakhstan.Russia ranked 7th in global copper mine production in 2014, with production of 850,000 metric tons of copper."Most of the production in Russia was from the magmatic sulfide-rich Ni-Cu-PGE deposits of the Noril'sk-Talnakh area.Since 2000, development activity has been reported at some porphyry copper deposits in the Urals in Russia.The Russian Copper Company is developing the mines and processing plants for the Mikheevskoe and Tomino porphyry copper deposits.Construction at Mikheevskoe began in 2011 as the largest new mining project to be constructed in Russia in recent times.An open pit mine planned for Tomino includes a processing plant slated to come on line in 2015 with a capacity to produce 52,000 t of copper concentrate per year.A hydrometallurgical pilot plant at Gumeshevskoe went into operation in 2005 to produce 5000 t of copper cathode per year.In Kazakhstan, which produced 430,000 metric tons of copper in 2014, most of the copper comes from porphyry copper deposits in the eastern part of the country and from sediment-hosted stratabound copper deposits at Dzhezkazgan in the Chu Sarysu Basin of south-central Kazakhstan.In the Urals area, copper production at Yubileinoe began in 2006, at Vavarinskoye in 2007, and at Benkala in 2012.Potentially economic copper resources in 22 undiscovered porphyry copper deposits in the four permissive tracts exceed the 6 Mt of identified porphyry copper resources in the Urals.These undiscovered resources are comparable in magnitude to the 30 Mt of copper reserves reported for Russia and 6 Mt of copper reserves reported for Kazakhstan for all deposit types in 2014.However, the discovery and potential development of undiscovered resources depends on continued exploration as well as social, economic, environmental, and political license to pursue mineral resource development.The complex geology of the Urals poses challenges for exploration.Much of the permissive geology lies under cover in the West Siberian Basin.Exposed areas in the southern Urals have low topography and deep weathering, and some areas are likely too deeply eroded to preserve porphyry copper deposits.The most prospective areas for additional porphyry copper deposits are the East Uralian and Transuralian tract areas, which host the most important deposits identified to date.Both the Magnitogorsk and Tagil-Polar tracts may 
represent future sources of copper in porphyry and porphyry-related skarn deposits.In addition to copper, undiscovered porphyry copper deposits in the Urals may contain significant amounts of molybdenum, gold, and silver."The fact that molybdenite in porphyry copper deposits provides most of the world's rhenium prompted a number of studies of the rhenium content of molybdenite in porphyry copper deposits of the Urals.The distribution of Paleozoic magmatic arc complexes that host porphyry copper deposits in Asia reflects the complex history of amalgamation of the major Paleozoic orogenic systems on the continent.The Uralides orogenic system records the amalgamation of island arcs accreted to the European craton and the continental margin of the Kazakh craton, whereas the predominantly east-west continental margin and island-arc complexes of the Central Asian Orogenic Belt mark the collision of the Siberian craton with craton blocks of China to the south and the closure of the PaleoAsian and Tethyan oceans.The southern Asia belts of Paleozoic rocks that are permissive for porphyry copper deposits include early Paleozoic island arcs and Devonian continental arcs related to north-directed subduction of the Paleotethys Ocean and accretion to the Central Asian Orogenic Belt.In contrast to the Urals, the other Paleozoic orogens of Asia have a long and complex post-Carboniferous tectonic and magmatic history related to Mesozoic evolution of the Pacific margin and ongoing Alpine-Himalayan tectonics along the Tethysides orogenic belt.Permissive tracts for Paleozoic porphyry copper deposits in continental Asia from the global mineral resource assessment are symbolized by dominant tectonic setting on Fig. 11.Many of the permissive tracts for porphyry copper deposits in Asia are large, generalized, and cover long time periods.In detail, many of the permissive tracts represent mixed and overprinted tectonic settings.Nevertheless, some broad patterns emerge.The world class Devonian Oyu Tolgoi porphyry copper deposit in Mongolia is associated with primitive island-arc rocks in an area where Paleozoic island- and continental-margin arcs accreted to the margin of Asia.Oyu Tolgoi remained buried until Cretaceous uplift brought the deposit near to the surface.A similar juxtaposition of Paleozoic accreted island- and continental- margins arcs occurs in the eastern Urals, where the Mesozoic and Cenozoic sedimentary rocks of the West Siberian Basin may conceal deeply buried undiscovered porphyry copper deposits within the Paleozoic basement, especially along the westernmost margins of the basin.Many porphyry copper deposits throughout the world are valued more for their gold or molybdenum than for their copper.Commodities reported for porphyry copper occurrences in the Urals suggest that both Mo-rich and Au-rich systems are present in all of the permissive tracts.An analysis of distinctions in metal contents in global porphyry copper deposits as a function of tectonic setting showed that tonnages and copper grades are not significantly different among deposits formed in island arcs, continental-margin arcs, and postconvergent settings.However, average molybdenum grades are highest in deposits in postconvergent settings and lowest in island arc-hosted deposits.Average gold grades in deposits in postconvergent and island-arc settings are higher than those in deposits found in continental-margin arcs.The presence of some younger rocks with permissive lithologies for porphyry copper deposits in close proximity to 
the older island-arc rocks in the Tagil-Polar tract such as at Novogodnee-Monto suggests that both island arc and postconvergent porphyry copper mineralization occurred in the Urals, which may be reflected in apparent enrichments in both gold and molybdenum.Lack of information about metal grades however, precludes more detailed analysis.Use of larger-scale geologic maps and more detailed information on the relative amounts of permissive volcanic rocks included in map units for stratified rocks would help refine the tract delineations.Similarly, regional compilations of locations, age, and geochemistry of igneous rocks would help define permissive tectonic settings for porphyry copper deposits.No conflicts of interest.
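To make the assessment workflow described above concrete, the following is a minimal Monte Carlo sketch of the general approach: sample a number of undiscovered deposits from a probabilistic estimate, draw ore tonnage and copper grade for each deposit from a grade-tonnage model (approximated here as lognormal), and screen deposits with a simple depth-dependent cutoff. All distributions, depth shares, and cutoffs below are placeholder assumptions; this is not the EMINERS program, the Singer et al. grade-tonnage model, or the Robinson and Menzie cost filter used in the assessment.

```python
# Illustrative Monte Carlo sketch of a probabilistic porphyry copper assessment.
import numpy as np

rng = np.random.default_rng(42)
N_TRIALS = 100_000

# Probability distribution over the number of undiscovered deposits in a tract
# (placeholder values loosely patterned on 90/50/10% consensus estimates).
deposit_counts = np.array([0, 1, 2, 3, 4, 6, 8, 12])
count_probs    = np.array([0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05])

# Lognormal grade-tonnage parameters (placeholders; a real assessment would
# use an established global porphyry Cu-Au-Mo model).
LOG_TONNAGE_MEAN, LOG_TONNAGE_SD = np.log(250e6), 1.0   # metric tons of ore
LOG_GRADE_MEAN,   LOG_GRADE_SD   = np.log(0.44), 0.4    # percent Cu

# Depth distribution of deposit tops and a crude economic screen: deeper
# deposits must contain more copper to justify development (placeholder cutoffs).
depth_bins  = np.array([250, 500, 1000])                 # metres
depth_probs = np.array([0.25, 0.25, 0.50])
min_cu_tonnes_by_depth = {250: 0.3e6, 500: 0.8e6, 1000: 2.0e6}

total_cu, economic_cu = np.zeros(N_TRIALS), np.zeros(N_TRIALS)
for i in range(N_TRIALS):
    n = rng.choice(deposit_counts, p=count_probs)
    if n == 0:
        continue  # some probability of no undiscovered resources
    tonnage = rng.lognormal(LOG_TONNAGE_MEAN, LOG_TONNAGE_SD, n)
    grade   = rng.lognormal(LOG_GRADE_MEAN, LOG_GRADE_SD, n)     # % Cu
    depth   = rng.choice(depth_bins, p=depth_probs, size=n)
    cu      = tonnage * grade / 100.0                            # tonnes of Cu
    total_cu[i] = cu.sum()
    cutoff = np.array([min_cu_tonnes_by_depth[d] for d in depth])
    economic_cu[i] = cu[cu >= cutoff].sum()

print(f"mean undiscovered Cu: {total_cu.mean()/1e6:.1f} Mt "
      f"(median {np.median(total_cu)/1e6:.1f} Mt)")
print(f"P(no undiscovered Cu) = {(total_cu == 0).mean():.2f}")
print(f"mean potentially economic Cu: {economic_cu.mean()/1e6:.1f} Mt")
```

In the actual assessment the economic fraction is derived from engineering cost models that account for mining method, metal prices, recovery rates, and infrastructure, so the fixed cutoffs here serve only to illustrate how an assumed depth distribution propagates into the economic share of the simulated resource.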
A probabilistic mineral resource assessment of metal resources in undiscovered porphyry copper deposits of the Ural Mountains in Russia and Kazakhstan was conducted using a quantitative form of mineral resource assessment. Permissive tracts were delineated on the basis of mapped and inferred subsurface distributions of igneous rocks assigned to tectonic zones that include magmatic arcs where the occurrence of porphyry copper deposits within 1 km of the Earth's surface is possible. These permissive tracts outline four north-south trending volcano-plutonic belts in major structural zones of the Urals. From west to east, these include permissive lithologies for porphyry copper deposits associated with Paleozoic subduction-related island-arc complexes preserved in the Tagil and Magnitogorsk arcs, Paleozoic island-arc fragments and associated tonalite-granodiorite intrusions in the East Uralian zone, and Carboniferous continental-margin arcs developed on the Kazakh craton in the Transuralian zone. The tracts range from about 50,000 to 130,000 km2 in area. The Urals host 8 known porphyry copper deposits with total identified resources of about 6.4 million metric tons of copper, at least 20 additional porphyry copper prospect areas, and numerous copper-bearing skarns and copper occurrences. Probabilistic estimates predict a mean of 22 undiscovered porphyry copper deposits within the four permissive tracts delineated in the Urals. Combining these estimates with established grade and tonnage models predicts a mean of 82 million metric tons of undiscovered copper. Application of an economic filter suggests that about half of that amount could be economically recoverable based on assumed depth distributions, availability of infrastructure, recovery rates, current metals prices, and investment environment.
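The headline figures above come from combining a probabilistic estimate of the number of undiscovered deposits with grade and tonnage models. The following Python sketch illustrates that general Monte Carlo logic only; the deposit-count distribution and the lognormal grade-tonnage parameters are hypothetical placeholders, not the tract-level inputs used in the assessment.

# Illustrative Monte Carlo sketch: combining an estimated number of undiscovered
# deposits with a grade-tonnage model to simulate contained copper.
# All parameter values are hypothetical placeholders, not the assessment's inputs.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical discrete distribution for the number of undiscovered deposits in a tract
deposit_counts = np.array([0, 5, 10, 22, 40, 80])
count_probs = np.array([0.05, 0.20, 0.25, 0.25, 0.15, 0.10])

# Hypothetical lognormal grade-tonnage model (tonnage in Mt of ore, Cu grade in %)
log_tonnage_mu, log_tonnage_sigma = np.log(200.0), 1.0   # median ~200 Mt of ore
log_grade_mu, log_grade_sigma = np.log(0.45), 0.4        # median ~0.45% Cu

contained_cu = np.zeros(n_trials)  # million metric tons of copper per trial
for i in range(n_trials):
    n_deposits = rng.choice(deposit_counts, p=count_probs)
    if n_deposits == 0:
        continue
    tonnage = rng.lognormal(log_tonnage_mu, log_tonnage_sigma, n_deposits)  # Mt ore
    grade = rng.lognormal(log_grade_mu, log_grade_sigma, n_deposits)        # % Cu
    contained_cu[i] = np.sum(tonnage * grade / 100.0)                        # Mt Cu

print(f"Mean undiscovered Cu: {contained_cu.mean():.1f} Mt")
print(f"10th/50th/90th percentiles (Mt): {np.percentile(contained_cu, [10, 50, 90]).round(1)}")

An economic filter of the kind mentioned in the abstract would then be applied to each simulated deposit (e.g., discarding tonnage assumed to be too deep or too costly to recover) before summarizing the recoverable fraction.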
455
The effects of autobiographical memory flexibility (MemFlex) training: An uncontrolled trial in individuals in remission from depression
Participants were 38 individuals aged 21–71 years who were in remission from depression.By this we mean that all participants had a lifetime diagnosis of Major Depressive Disorder but were not currently experiencing a Major Depressive Episode according to the Diagnostic and Statistical Manual of Mental Disorders criteria, as indexed by the Structured Clinical Interview for the DSM.Inter-rater reliability was completed for 25% of SCIDs, and 100% agreement was observed between raters on both MDE and MDD diagnosis.The sample was predominantly White British and currently employed.Comorbid diagnoses were anxiety disorders, posttraumatic stress disorder, past binge eating disorder and past substance/alcohol abuse.As recommended for relapse prevention, 45% of participants were receiving medication.Three participants attended weekly support groups and three received monthly booster sessions of psychotherapy.Exclusion criteria were a current MDE, psychotic symptoms, current alcohol/drug dependence, reported significant suicidal ideation, or reported significant self-harm, all assessed by the SCID.Individuals with a prior diagnosis of a personality disorder, head trauma or organic brain lesions were also excluded.The MemFlex programme is predominantly self-guided, and consists of one 45-min orientation session with a facilitator followed by six self-completed workbook sessions completed over a one-month period.During the orientation session, information was provided about the negative, overgeneral autobiographical memory bias associated with depression.Individuals were encouraged to reflect on their personal experience of this bias.Participants were also educated about the role of autobiographical memory in everyday life and the different ways in which autobiographical memories are retrieved and used.Finally, the structure of the workbook was previewed, and practice exercises were completed with the aid of the facilitator.The participant was then assisted in setting a schedule for workbook completion.The guideline was that two sessions were to be completed per week over three weeks.Each of the six MemFlex workbook sessions follows an identical format.The theory behind a particular skill is presented, followed by examples, and finally, exercises for the individual to complete.The workbook presents three key autobiographical memory skills: Balancing, Elaboration and Flexibility.Balancing refers to restoring balance to the ease with which different types of personal memories can be retrieved, such that ease of recollection of positive and negative, specific and general memories, is comparable.As depression is associated with a predisposition toward negative, general recollection, the programme involves no practice in negative, generalised retrieval.After participants have developed their skills in accessing positive specific memories, the programme encourages elaboration of concrete details of those positive memories.Individuals are asked to describe internal experiences, such as bodily sensations and emotions, and situational detail, such as the presence of others and location of the event.This targets the problem that positive recollection in depression often lacks sensory and emotional richness and detail.Finally, the programme encourages flexibility in alternating between retrieving memories of different types.This is achieved by requiring the individual to begin with a general memory, and retrieve the related specific events, and vice versa.The
individual is also trained to identify the context in which a specific memory is likely to be optimal, and to identify when a general memory is optimal.The individual is given repeated practice via exercises targeting each of these three key skills.The assessor viewed the workbook at the post-intervention assessment to ensure that all sessions had been completed.Seventy-six percent of participants completed the entire workbook and 84% completed four or more sessions.The AMT is a cued-retrieval assessment of overgeneral memory bias in which participants are asked to retrieve a specific memory in response to each of a set of cue words.Overgeneral memory is indexed by the number of specific memories retrieved during the task.We used parallel sets of the cue words compiled by Brittlebank, Scott, Williams, and Ferrier, which were matched on word frequency.Cue sets were counterbalanced between pre- and post-intervention assessments.At each assessment, participants were asked to provide a specific memory in response to six positive and six negative cue words.Cues were presented verbally and in written form.Three practice trials were completed prior to the test trials.A 30-s time limit was provided for memory retrieval.Participants were prompted once if they generated an unclear response.Responses were audio-recorded and coded as specific, general-extended, general-categoric, semantic association or as an omission.Inter-rater reliability of this coding for a random selection of 15% of memories was good.The RRS is a subscale of the Response Styles Questionnaire.This self-report measure assesses the frequency of ruminative thoughts or actions during depressed mood, and can be divided into three different subscales: depression-related items, self-reflection, and brooding.Reliability and validity of the RRS are satisfactory.Internal consistency in this sample was high, Cronbach's α = .95.
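The internal-consistency values reported here (and later for the CAQ) are Cronbach's α. A minimal sketch of that computation is below; the response matrix is fabricated purely to show the formula, so the resulting α is not meaningful and the array dimensions are illustrative only.

# Minimal sketch of Cronbach's alpha for a questionnaire such as the RRS or CAQ.
# `responses` is a hypothetical (n_respondents x n_items) array of item scores.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)       # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(1, 4, size=(36, 22), endpoint=True).astype(float)  # fabricated demo data
print(f"alpha = {cronbach_alpha(demo):.2f}")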
"The MEPS is a widely-used measure of the ability to think strategically; that is, to plan in consecutive steps to achieve a specific aim.We used the abbreviated version of the MEPS, which has adequate alternate form reliability with the full version.Participants were presented with a social problem and an outcome and asked to provide a stepwise solution to the situation that resulted in the provided outcome.The given scenarios were: gaining new friends in the neighbourhood; resolving a conflict with a friend; resolving a conflict with a partner; and getting along with the boss at work.Scenarios were counterbalanced between pre- and post-assessment, with two scenarios presented at each time point.The number of means was recorded, along with the effectiveness of the taken approach."Effectiveness for each scenario was defined using the criteria of D'Zurilla and Goldfried; namely, the maximization of favourable and minimization of unfavourable short-term and long-term consequences on a personal and social level.Inter-rater reliability was high regarding the relevant means and satisfactory for the overall effectiveness rating.The CAQ is a 21-item self-report measure indexing cognitive avoidance in the form of suppression, transformation, substitution, distraction, and avoidance, of distressing thoughts/images.We used the total score in all analyses.Higher scores indicate greater cognitive avoidance.The CAQ has adequate psychometric properties.Internal consistency in this sample for the total score was high, α = .95.The Verbal Fluency Task measures the fluency of information retrieval according to a set of discrete rules and was implemented as a general measure of fluency and executive control.Participants were asked to generate as many words as possible with reference to a specific category within 1 min, without repetition.Versions of the task were counterbalanced between assessment points.Scores were defined as the total number of words recalled minus the errors made.The Beck Depression Inventory is a widely-used 21-item, self-report measure of depression severity.The BDI-II has satisfactory validity and reliability.The Beck Hopelessness Scale was also administered to measure negative attitudes about the future, as reducing negativity in recollection of the past may also change the way one sees the future.The BHS consist of 20 true-false statements, and possesses good reliability and validity.The Beck Anxiety Inventory was also administered as the cognitive risk factors we targeted may also be associated with anxiety.The self-report scale is both reliable and valid.At baseline, we administered the Verbal Paired Associates and Digit Span tasks to assess whether the sample evidenced any general memory deficits as these may impact performance on our outcome measures, particularly the AMT.The VPA is a subtest of the Wechsler Memory Scale where participants learn eight pairs of non-associated words across four trials.Immediate and Delayed Recall was assessed to provide a measure of short-term and long-term memory.Digit Span is a subtest of the Wechsler Adult Intelligence Scale, which requires the individual to maintain and reorder numbers in working memory.Both forward and backward Digit Span tests were administered.Ethics approval was granted by the Cambridge Psychology Research Ethics Committee.We advertised for participants in local health services and through our research participation database.Potential participants completed a phone screening to assess eligibility, following which they were 
invited to visit the centre and complete the SCID.In a second session, pre-intervention measures were administered along with the workbook orientation.Participants were then sent home with the workbook, and were telephoned approximately 1.5 weeks later to address any arising questions or difficulties with the workbook.Participants returned to the centre approximately one month after the pre-intervention session to complete the post-intervention assessment.Participants received approximately £39 for participation, with the exact amount determined at an hourly rate.We were unable to contact two participants at post-intervention and thus experienced 5% attrition.Data were therefore analysed for 36 participants.Examination of pre-intervention measures indicated that the sample was in the normal range on our assessments of standard memory functioning: VPA Immediate Recall, VPA Delayed Recall, and Digit Span.Consistent with remitted depression, depression symptoms at pre-intervention on the BDI-II were in the ‘minimum’ range, as were anxiety symptoms on the BAI, and hopelessness scores on the BHS were in the ‘mild’ range.Descriptive statistics are presented in Table 1.We used a repeated measures MANOVA with Time as the within-subjects factor, with appropriate follow-up univariate tests, to assess the family of proximal and intermediate effects of MemFlex.We employed a one-tailed alpha as we had clear a priori predictions about the nature of the anticipated cognitive changes, and the stable nature of each trait and our prior experience with memory training programmes suggested it was highly unlikely that performance would worsen.A second exploratory MANOVA was used to assess change in symptoms.In a final MANOVA, we examined broader cognitive fluency, as indexed by verbal fluency.The first MANOVA showed a significant multivariate effect of Time, Pillai's Trace = 0.70, F = 9.20, p < .001.Follow-up univariate analyses provided support for a significant reduction in overgeneral memory bias, F = 7.89, p = .004, d = 0.48, an increase in the number of means used in problem solving, F = 6.24, p = .009, d = 0.55, and in the overall effectiveness of their problem solving, F = 3.34, p = .038, d = 0.33, and a decrease in cognitive avoidance, F = 3.05, p = .045, d = 0.18, albeit with a small effect size.Examination of the rumination subscales indicated that the decrease in overall rumination, F = 5.60, p = .012, d = 0.29, was driven by improvements in both the depressive, F = 7.87, p = .004, d = 0.33, and brooding subscales, F = 5.28, p = .014, d = 0.28.There was no significant change in the self-reflective subscale of rumination, F = 0.12, p = .367.In sum, significant improvements were observed across all cognitive risk factors.The second exploratory MANOVA, examining changes in symptoms from pre- to post-intervention, generated a non-significant multivariate effect of Time, Pillai's Trace = 0.08, F = 0.52, p = .757.Examination of the univariate output revealed no significant change from pre- to post-intervention for depression symptoms, F < 1, hopelessness, F < 1, or anxiety symptoms, F = 1.92, p = .087.Finally, we were interested in whether the programme would impact cognitive fluency more broadly.However, no change was observed in verbal fluency for the total score, F < 1, nor for errors, F < 1.This initial study has demonstrated the ability of a novel autobiographical memory-based intervention to modify multiple cognitive processes that are both linked to
autobiographical memory problems and also implicated in the onset and maintenance of depression.We observed a reduction in overgeneral memory bias, rumination and cognitive avoidance, along with an improvement in social problem solving from pre- to post-intervention.MemFlex had a small effect on cognitive avoidance, and a medium-sized effect on all other cognitive risk factors, with the exception of the self-reflective subscale of rumination, where there was no significant change.No significant improvement was observed in residual psychological symptoms, and effect sizes were trivial; however, this was not unexpected, as symptomatology was not the target of the programme, the sample was remitted, and symptom levels were not high.Recruitment of our sample from both health services and the wider community should help these results generalise well to the wider remitted population.Developing novel interventions that have the potential to reduce the risk of depression is an important goal for clinical research.Evidence that MemFlex can impact proximal and intermediate cognitive outcomes that are known to drive depression provides a robust platform for larger later-stage trials, using a randomised design and a longer follow-up period, in both remitted and currently depressed samples.Such a trial for individuals acutely experiencing an episode of depression is now underway.The self-guided, workbook format of MemFlex is also an advantage as it may aid the accessibility of the intervention.The programme is consequently considerably lower in administration cost than current, therapist-guided programmes targeting memory bias.In contrast to current cognitive interventions, MemFlex also does not require computer or internet access, and can be completed in a time and place that is most convenient for the individual.The workbook format also lends itself well to administration to large groups, such as school classes.The low-intensity format of the programme may be beneficial for those who have difficulty attending face-to-face appointments, or may be easily administered as an adjunct to more traditional depression treatments.For example, the improvement we observed in rigid cognitive biases suggests that completing MemFlex may provide a strong complement to more intense cognitive therapy.Although the results of this study are promising, there are some factors that will need to be addressed when developing MemFlex further.An outcome measure that indexes flexible movement between specific and general memories would provide a more fine-grained assessment of the flexibility of memory retrieval.Similarly, more extensive exploration of the effect of the programme on broader cognitive flexibility is needed, and a more direct measure of abstract and concrete information processing may index changes in processing flexibility.Completion of a Phase II randomised controlled trial with longer follow-up will determine whether MemFlex is able to reduce the occurrence of future depressive episodes.This study has provided initial evidence in support of a novel cognitive intervention, MemFlex, in reducing proximal and intermediate cognitive risk factors for depression.This programme has promise as a cost-effective, low-intensity option for reducing factors associated with depressive risk, and improving access to psychological intervention.This project was funded by the UK Medical Research Council Grant MC_US_A060_0019.The authors have no conflict of interest to declare.
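With only two assessment points, the repeated-measures MANOVA reported above reduces to a one-sample Hotelling's T² test on the pre-to-post difference scores. The sketch below shows that computation on fabricated data; the array shapes, values, and the number of outcomes are illustrative assumptions, not the study's dataset.

# Minimal sketch of the pre/post multivariate comparison: with two time points,
# a repeated-measures MANOVA on p outcomes is equivalent to a one-sample
# Hotelling's T^2 test on the pre-to-post difference scores.
# `pre` and `post` are hypothetical (n_participants x p_outcomes) arrays.
import numpy as np
from scipy import stats

def hotelling_t2_paired(pre: np.ndarray, post: np.ndarray):
    diff = post - pre                        # difference scores, shape (n, p)
    n, p = diff.shape
    mean_diff = diff.mean(axis=0)
    cov = np.cov(diff, rowvar=False)         # p x p sample covariance of differences
    t2 = n * mean_diff @ np.linalg.solve(cov, mean_diff)
    f_stat = (n - p) / (p * (n - 1)) * t2    # convert T^2 to an F statistic
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, f_stat, p_value

rng = np.random.default_rng(1)
pre = rng.normal(size=(36, 4))               # fabricated demo data only
post = pre + rng.normal(0.3, 1.0, size=(36, 4))
t2, f_stat, p_value = hotelling_t2_paired(pre, post)
print(f"T^2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_value:.4f}")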
Background and Objectives Impaired cognitive processing is a key feature of depression. Biases in autobiographical memory retrieval (in favour of negative and over-general memories) directly impact depression symptoms, but also influence downstream cognitive factors implicated in the onset and maintenance of the disorder. We introduce a novel cognitive intervention, MemFlex, which aims to correct these biases in memory retrieval and thereby modify key downstream cognitive risk and maintenance factors: rumination, impaired problem solving, and cognitive avoidance. Method Thirty-eight adults with remitted Major Depressive Disorder completed MemFlex in an uncontrolled clinical trial. This involved an orientation session, followed by self-guided completion of six workbook-based sessions over one month. Assessments of cognitive performance and depression symptoms were completed at pre- and post-intervention. Results Results demonstrated medium-sized effects of MemFlex in improving memory specificity and problem solving, and decreasing rumination, and a small effect in reducing cognitive avoidance. No significant change was observed in residual symptoms of depression. Limitations This study was an uncontrolled trial, and has provided initial evidence to support a larger-scale, randomized controlled trial. Conclusions These findings provide promising evidence for MemFlex as a cost-effective, low-intensity option for reducing cognitive risk associated with depression.
456
Airway management for symptomatic benign thyroid goiters with retropharyngeal involvement: Need for a surgical airway with report of 2 cases
Enlarging thyroid goiters often manifest with dyspnea and dysphagia.Underlying these symptoms are alterations to the laryngo-tracheal and cervical esophageal anatomy, due to compression and deviation.While these changes pose potential challenges with regard to intubation, the majority of patients with large, symptomatic goiters can be readily managed by straightforward oral intubation .However, the “difficult airway” is occasionally encountered and requires a team-based approach to safely securing the airway.The following is a report of two cases with enlarging thyroid goiters with significant retropharyngeal extension.Both cases involved multiple unsuccessful attempts at oral and nasotracheal intubation, and ultimately required surgical intervention in order to establish a secure airway.In patients whose goiters extend to the retropharyngeal compartment, the resulting anterior displacement of the larynx and the posterior hypopharyngeal wall obstructs the view of the glottis and the normal path of the endotracheal tube, thereby making conventional approaches unsuitable options to safely establish an airway.A 97-year-old female presented with a three-year history of progressive dyspnea, odynophagia and dysphagia.Her symptoms were exacerbated when lying supine.She had lost 20 pounds over a one-year span, and an esophagram showed impeded passage of a tablet through the proximal esophagus.Past medical history included diabetes mellitus, hypertension, hypercholesterolemia, and cardiac arrhythmia.On exam, the patient was in no acute distress, and exhibited mild biphasic stridor at rest.Her examination revealed massive anterior and bilateral thyroid enlargement, with significant tracheal deviation to the left.A computed tomography scan of the neck showed a markedly enlarged thyroid gland with impingement of the posterior wall of the hypopharynx and leftward deviation and narrowing of the trachea.The left lobe measured 10.3 × 4.6 × 3.6 cm, the right lobe measured 12.2 × 5.2 × 4.6 cm, and the isthmus measured 6.0 × 2.0 × 1.5 cm.In a cephalocaudal direction the thyroid gland extended from the level of the submandibular gland to 1 cm retrosternal.She underwent an ultrasound-guided fine needle aspiration for a thyroid nodule which was consistent with Bethesda II cytopathology.Pre-operatively, 10 mg of dexamethasone was intravenously administered to mitigate airway edema.Bag-mask ventilation was easily achieved upon induction with nitrous oxide and fentanyl.Direct laryngoscopy was then performed with multiple laryngoscopes, including a GlideScope, but without successful visualization beyond the epiglottis.After three failed attempts, fiberoptic nasotracheal intubation was attempted.However, the flexible scope was not able to circumvent the angled path to the anteriorly-displaced glottic opening.The patient remained stable without desaturation due to adequate bag-mask ventilation.At this point, the decision was made to proceed with a surgical airway.The size of the goiter and the isthmus made tracheal exposure challenging.Meticulous dissection and retraction of the thyroid allowed safe entry into the airway.Once the airway was successfully secured, a total thyroidectomy was performed uneventfully.In light of the large dead space created by the removal of the thyroid goiter, the decision was made to formalize the tracheal stoma in order to prevent contamination with tracheal secretions.A cuffed tracheostomy tube was then placed and the patient was observed in the intensive care unit.The tracheostomy tube 
was successfully downsized and capped for 24 hours and the patient was subsequently decannulated on post-operative day two.At the two-week follow-up, the stoma site had completely healed.Final pathology of the thyroid revealed focal low-grade B-cell lymphoma with mucosa-associated lymphatic tissue.A 51-year-old female presented with a five-year history of an enlarging multinodular goiter.The patient described constant globus sensation as well as progressive dyspnea over the past year, exacerbated by supine positioning.On examination, a large goiter was readily evident and distorted the normal anatomic contour of the neck, but no significant stridor was appreciated.Biopsies of three thyroid nodules were interpreted as Bethesda class II.The patient's past medical history was significant for obesity, severe sleep apnea, hypertension, asthma and depression.A CT neck with contrast revealed primarily right-sided heterogeneous enlargement of the thyroid gland causing mass effect against the anterior and lateral trachea.There was cephalocaudal extension from retrosternal to the inferior aspect of the parapharyngeal space at the level of the right submandibular gland.The enlarged goiter measured 11.6 × 6.0 × 8.0 cm for the right lobe, and 4.5 × 1.5 × 1.8 cm for the left lobe.Pre-operatively, 10 mg of dexamethasone IV was administered to mitigate airway edema.The patient was induced with succinylcholine and was easily managed with mask ventilation.Direct laryngoscopy using Miller and Mac blades was unsuccessful at visualizing the anteriorly and laterally displaced larynx.A combined approach using the GlideScope and nasal fiberoptic intubation was then attempted but visualization was poor due to the compressed hypopharynx and anterior displacement of the glottic inlet.Again, bag-mask ventilation was continued following each intubation attempt.After further discussion between the anesthesia and surgical teams, it was decided to proceed with establishing a surgical airway, which was successfully accomplished in a manner similar to case 1.The right lobectomy was then performed without complication.At the completion of thyroid lobectomy, the tracheostomy site was formalized to the skin.The patient's tracheostomy tube was downsized on post-operative day one and decannulated on post-operative day two.On follow-up, her stoma site had completely healed.Final pathology revealed a benign thyroid.Large thyroid goiters are associated with compressive symptoms of dysphagia and dyspnea, leading many patients to pursue surgical treatment.Because of these underlying symptoms, as well as a readily apparent large neck mass on examination, there is often concern regarding the ease of intubation in these patients.Multiple studies have attempted to identify pre-operative indicators of difficult intubation in thyroid goiter patients.Pre-operative compressive symptoms, goiter size, radiographic evidence of tracheal compression and/or deviation, and retrosternal involvement have been hypothesized to be potential factors associated with difficult intubation.However, none of these factors have proven to be associated with difficult intubation, and all patients in these studies were successfully intubated via a standard oral or a fiberoptic intubation approach.However, goiter involvement of the retropharyngeal compartment represents a unique clinical entity requiring heightened caution in managing the airway.In both of our cases described above, intubation attempts by experienced anesthesiologists and surgeons were
unsuccessful.Despite multiple attempts with various laryngoscopes and fiberoptic scopes, visualization and access were complicated by hypopharyngeal crowding and anterior displacement of the larynx.Fortunately, both patients remained easily bag-mask ventilated and underwent tracheostomy in an expedited but non-emergent fashion.Thomas et al. also reported on the need for tracheotomy in a patient with retropharyngeal involvement of a thyroid goiter.In their case report, an attempt at oral intubation was deferred due to obstruction noted on fiberoptic laryngoscopy pre-operatively.Other reports in the literature of retropharyngeal thyroid goiters do not discuss perioperative airway management.In situations with retropharyngeal goiters, we advocate for deferred attempts at intubation in favor of an immediate surgical airway.Attempting intubation, even by an awake fiberoptic method, can be extremely challenging if not impossible.Manipulation of the airway in these patients can lead to further airway compromise and the need to establish a surgical airway in an emergent fashion.In light of the potential difficulty in performing a tracheostomy in a patient with a large goiter and displaced larynx and trachea, the avoidance of intubation attempts helps to maintain safe ventilation.One disadvantage of tracheostomy versus oral intubation is the inability to place a nerve integrity monitor ETT through the glottis for intraoperative recurrent laryngeal nerve monitoring.This can potentially be circumvented by inserting an Eschmann or wire catheter in a retrograde fashion through the tracheostomy into the oral cavity and intubating with a nerve-monitoring tube.Post-operatively, formalization of the tracheostomy is essential in closing off the potential space created by the removal of a large goiter.This will help to decrease the risk of secretions from the trachea seeding the thyroid bed and thereby leading to abscess formation.In both of our cases, this complication was avoided and the tracheal stoma had completely healed within three weeks of decannulation.The surgeon, anesthesiologist and patient should be aware of the potential difficulty with intubation in thyroid goiters with retropharyngeal involvement.Attempts at visualization with direct laryngoscopy, GlideScope video laryngoscopy and fiberoptic laryngoscopy may prove unsuccessful and potentially lead to an unstable airway.An immediate and planned surgical airway is recommended as the initial approach to safe general anesthesia.Management of the tracheal stoma following thyroidectomy by formalization is critical to avoid wound complications.
Background: Intubation prior to surgical intervention for thyroid goiters is typically straightforward and uneventful. However, retropharyngeal extension of thyroid goiters is a unique entity which is characterized by displacement of the hypopharynx and laryngeal deviation. Methods: Two patients presented with progressive compressive symptoms due to enlarging thyroid goiters. Imaging revealed thyroid goiters with significant retropharyngeal involvement causing anterior displacement of the larynx and hypopharynx. Results: Both patients were unsuccessfully intubated by direct laryngoscopy, GlideScope laryngoscopy and flexible fiberoptic laryngoscopy. Tracheostomy was performed to safely establish the airway, and thyroidectomy was subsequently performed uneventfully. Formalization of the tracheal stoma was performed on both patients to prevent soilage of the thyroid bed with tracheal secretions. Conclusions: Retropharyngeal involvement of thyroid goiters can pose significant difficulty with intubation. Airway compromise can be avoided by directly proceeding with a surgical airway. Management of the tracheal stoma is an important step in preventing postoperative infection.
457
An exploration of research information security data affecting organizational compliance
The sample included data collected from onsite research information security compliance reviews completed by the Veterans Health Administration Office of Research Oversight from the year 2009 through 2017.The purpose of these reviews was to evaluate VHA research programs' adherence to federal and organizational information security requirements.103 research programs were evaluated, with 10% of the sample drawn from research programs located at VHA hospitals of lower complexity, 12% from research programs located at VHA hospitals of medium complexity, and 78% from research programs located at VHA hospitals of high complexity.Across the programs evaluated, over two thousand employees, ranging from support to executive staff, participated in the onsite reviews, with the highest participation from the research program.Compliance and oversight staff accounted for 14% of employee participation and included Privacy Officers, Information Security Systems Officers, and Research Compliance Officers.Information collected during the onsite research information security compliance reviews was derived from in-depth interviews, document reviews, and physical evaluations of the research space including offices, laboratories, assigned clinical spaces, and server rooms.In addition, physical evaluations of certain data-capable information technology equipment were completed as part of each review.Noncompliance for each site was documented in a site-specific report, and the data contained in those reports were compiled and subjected to statistical analysis.In addition, anecdotal evidence contained in reviewer notes relating to the reasons for the noncompliance was also qualitatively aggregated.Onsite reports were reviewed and each finding of noncompliance was placed in one of fifteen broad categories.Those categories were further distilled and the findings of noncompliance clustered based on similarity, and placed into seven primary groupings.The findings in each of the seven categories were then separated into three subcategories representing technological, procedural, and behavioral implications.For example, if an automated backup of research-related data failed, the consequential finding was placed into the technological subcategory.Likewise, if the noncompliance was because of an erroneous policy or required form, that finding was placed in the procedural subcategory.Last, noncompliance as a direct consequence of an employee behavior, such as the failure of research staff to properly store and/or transmit sensitive research data in compliance with established policy, the failure to report a research information security incident, or the failure to complete required training, was relegated to the behavioral subcategory.The ensuing data are illustrated in Tables 3–7.For statistical analysis, frequency and cross-tabulation statistics were conducted to describe the sample and check for coding errors.Chi-square statistics were used to test for associations between complexity and noncompliance for each area of interest.Significant associations were reported using unadjusted odds ratios with 95% confidence intervals.Statistical significance was assumed at an alpha value of 0.05 and all analyses were conducted using the Statistical Package for the Social Sciences Version 22.Chi-square tests revealed several significant differences in rates of noncompliance between the complexity groups.Research programs located at high-complexity VHA hospitals were five times more likely to have procedural noncompliance with the use of external information
systems versus research programs located at VHA hospitals of lower complexity.Similarly, the trend was that research programs located at more complex VHA hospitals had higher rates of behavioral noncompliance than research programs located at VHA hospitals of lower complexity in the categories associated with the use of external information systems, the management of research information, the use of mobile and portable devices, and the ISSO review of research projects.Higher levels of procedural noncompliance related to privacy-related requirements were also observed in research programs located at more complex VHA hospitals versus those of lower complexity.The single exception to the trend involved technological noncompliance related to the management of research information, where research programs located at more complex VHA hospitals were less likely to have noncompliance versus those programs located at VHA hospitals of lower complexity.No significant differences were observed between research programs located at VHA hospitals of medium complexity and those of lower complexity in terms of noncompliance for any area.Frequencies and percentages associated with noncompliance for each area of interest and by complexity are in Table 8.By far, the highest rates of noncompliance occurred in the behavioral category, and were observed across all areas of analysis.In addition, rates of procedural noncompliance associated with the proper reporting of research information security incidents were above 40% for research programs at all VHA hospital levels.Public availability and further review and analysis of these data will expand the literature regarding information security compliance, including those specific factors that directly impact organizational risk mitigation strategy and employee adherence.The identified trends will help inform information security compliance decisions regarding program development and employee behavior, and may further inform decisions surrounding routine technological and procedural resources for detecting and mitigating information security risk.
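As a minimal sketch of the analysis described above, the following computes a chi-square test of association between hospital complexity and noncompliance for one area of interest, along with an unadjusted odds ratio and 95% confidence interval. The 2x2 counts are hypothetical placeholders, not the reported VHA data.

# Minimal sketch: chi-square test on a 2x2 complexity-by-noncompliance table
# plus an unadjusted odds ratio with a Woolf (log) 95% confidence interval.
# The counts below are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency

#                 noncompliant  compliant
table = np.array([[40,           40],     # high-complexity programs (hypothetical)
                  [ 3,            7]])    # lower-complexity programs (hypothetical)

chi2, p_value, dof, expected = chi2_contingency(table)

a, b = table[0]        # high complexity: noncompliant, compliant
c, d = table[1]        # lower complexity: noncompliant, compliant
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2({dof}) = {chi2:.2f}, p = {p_value:.3f}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

Note that scipy's chi2_contingency applies Yates' continuity correction to 2x2 tables by default; whether the published analysis did the same is not stated in the text.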
In this article, data collected from onsite assessments of federal healthcare research programs were reviewed and analyzed. 103 research programs were evaluated for adherence to federal and organizational information security requirements and the data clustered into three primary compliance groupings, technological, procedural, and behavioral. Frequency and cross-tabulation statistics were conducted and chi-square statistics used to test for associations.
458
Next generation data systems and knowledge products to support agricultural producers and science-based policy decision making
In the introduction to this special issue, Antle et al. discuss the critical need for data, models and knowledge products that provide user-friendly data acquisition and analytical capability for decision makers.The use cases range from farm-level decision support, to the agricultural research community and donors making research investment decisions, to policy decision makers whose goal is the sustainable management of natural resources.Janssen et al. provide examples of data and information technology structures that illustrate how private and public data components could be developed for such use cases.Jones et al. argue that the most important current limitation is data, both for on-farm decision support and for research investment and policy decision making.One of the greatest data challenges is to obtain reliable data on farm management decision making both for current conditions and under scenarios of changing bio-physical and socio-economic conditions.This paper discusses how farm-level decision models can be used to support farm decision making and to provide data for landscape-scale models for policy analysis.In the second section of this paper we provide an overview of the kinds of information needed to support science-based policies for sustainable landscape management as well as improved on-farm management.We describe how existing decision support tools could be used to develop a data infrastructure that can provide this type of information.In sections three and four we describe a landscape-scale policy analysis tool and a farm-level decision support tool that could be used to support landscape scale and farm level decision-making.Section five illustrates the use of these tools with an analysis of the economic potential for a new oilseed crop, Camelina sativa, to be incorporated into the winter wheat-fallow system used in the U.S. 
Pacific Northwest.In the concluding section we discuss the challenges that will need to be addressed if these and other similar data and modeling tools are to be integrated into data and modeling platforms that could support new knowledge products for both farm and policy decision makers.Both governmental and non-governmental organizations have established a wide variety of data, knowledge and institutional arrangements that together constitute an “infrastructure” that supports management of agricultural landscapes.This physical and institutional infrastructure differs greatly around the world, but all have in common the very substantial challenge of acquiring timely, site-specific data and combining them with analytical tools to improve the quality of decision making from farm to landscape scales.To varying degrees, this decision-making infrastructure has evolved in many countries along with public policy towards what we will describe as “science-based policy” – that is, policy designed to achieve the goal of sustainably managing agricultural landscapes as efficiently and effectively as possible given the best-available science and technology.A large and growing body of scientific knowledge from agricultural, environmental, economic and social science disciplines exists as a foundation on which a science-based policy for agriculture can be further advanced, starting with the idea that agriculture is a “managed ecosystem”.The scientific literature has established that farmers' land management decisions affect biological and physical systems through a number of mechanisms.Some effects, such as changes in soil productivity, may be limited to the land owned by the farmer; others, such as runoff into surface waters, may appear offsite.A key insight from this body of scientific literature is that agricultural productivity depends upon and plays a key role in providing a set of “ecosystem services” ranging from food production to the provision of clean water and maintenance of biodiversity.There are two types of policies and programs being used for agricultural landscape management, often referred to as “conservation” and “working lands” policies, closely related to the ideas of “land sparing” and “land sharing” used by ecologists for wildlife management.In addition to managing agricultural landscapes, agricultural policy in many countries has also sought to improve the economic well-being of agricultural households through a variety of subsidy programs that transfer income from taxpayers to agricultural producers and landowners.The biofuel policy we discuss later in this paper is an example of a working lands policy designed to produce environmental benefits by substituting biofuels for fossil fuels while maintaining food crop production.These and other types of domestic and trade policies may affect producers' land management decisions, and may either complement or conflict with the goals of sustainably managing agricultural landscapes.For example, the biofuel development program investigated later in this paper shows that subsidies may be required to achieve its goals of increasing biofuel crop production, but may also reduce food crop production and increase food prices.Both the resource efficiency and the distributional effects of policies are important to agricultural producers and to others in society, and need to be taken into account in designing science-based policies.Indeed, there are inevitably trade-offs among the various private and public goals related to the management of
agricultural landscapes.A goal of the knowledge infrastructure needed to support science-based policy is to improve our understanding of these trade-offs so that stakeholders can make informed choices among policy alternatives and their likely impacts.Economics provides an analytical framework to evaluate the need for policy interventions, given sufficient physical, biological and economic data.In this framework, typically described as “benefit-cost analysis,” private outcomes are combined with the value of “non-market” outcomes, such as maintaining water quality and biodiversity, to determine the management strategy that yields the best outcome for society.In principle, if all policy options could be evaluated in this way, the best option could be identified.To implement this benefit-cost framework, however, both quantities and values of marketed goods are needed, as well as quantities and values of non-market outputs.While it is straightforward to measure and value market outcomes such as the amount and value of corn produced in a given area, it is difficult to quantify and value non-market outcomes such as changes in ecosystem services.With adequate scientific understanding, spatially-relevant data and suitable measurement technologies, it is possible to objectively quantify the non-market outputs.But in many cases valuing non-market outputs is exceedingly difficult.For example, contamination of water by nutrients such as nitrates may have adverse impacts on human health, and it may be possible to estimate the magnitude of these effects, but it is difficult to attach a monetary value to health effects that is generally accepted by the affected people and society.Similarly, ecosystem services such as biodiversity are difficult to quantify and value in monetary terms.For these reasons, strict application of the “benefit-cost analysis” approach to the design of science-based policies faces serious challenges.An alternative to benefit-cost analysis is what we refer to as “policy tradeoff analysis”.Rather than attempting to attach monetary values to ecosystem services, the tradeoff analysis approach defines a set of quantifiable economic, environmental and social “indicators” that can be used to assess the status of the agricultural landscape and outcomes associated with it.Alternative policies are evaluated in terms of the interactions among these indicators.In this approach, there is no one “solution” or best policy because different stakeholders may value tradeoffs between outcomes differently.However, the tradeoff analysis approach has the virtue of providing the various stakeholders with the information they need to make these value judgments.Tools suitable for policy tradeoff analysis are already being used in research and policy design.Many indicators have been developed for policy analysis.Various measures of farm household well-being are used, such as farm income and its distribution among geographic regions and among different types of farms.Measures of environmental outcomes and ecosystem services are available from direct measurements and from models, including soil quality and productivity, air and water quantity and quality, greenhouse gas emissions, and wildlife habitat.For example, the U.S.
Department of Agriculture has constructed an “environmental benefits index,” which combines a number of different environmental indicators into a summary measure, to assist in the design and implementation of conservation programs.The increasing utilization of precision farming and mobile technologies, together with improvements in data management software, offers expanding opportunities for an integrated data infrastructure that links farm-level management decisions to site-specific bio-physical data and analytical tools to improve on-farm management.These farm-level data can be integrated with public data at the landscape scale for research and policy analysis.Analytical tools using data at the landscape scale could provide the improved understanding needed to support science-based policy and sustainable management of agricultural landscapes.Much of this growing volume of new data is private – for example, information about where and when agricultural operations occur, and their consequences.There is also a growing amount of public data, such as satellite imagery and weather and soil data, historical crop yields, and economic data.A critical feature of the new knowledge infrastructure is that it must be able to measure, store, manage and integrate both private and public data in ways that respect the privacy and proprietary interests of individuals while enabling diverse stakeholders to benefit from improved information and analyses.In addition to the need to be profitable and provide an acceptable standard of living for the farm household, farm decision making must increasingly respond to the requirements of environmental regulations and related public policies aiming to achieve more sustainable resource management.Farmers must also meet the demands of food companies and the public for assurance that sustainable and ethical practices are being used.All of these pressures – economic, environmental and social – create a need for better farm-level data and analytical tools.New technologies began to provide new sources of “big data” for farm management beginning with the automation of agriculture in the 1990s.Machinery, including tractors, chemical applicators, and harvesters, is now equipped with global positioning system controllers that can both control and track various aspects of the farm operations, and hand-held mobile devices as well as personal computers and management software provide managers with ways to enter information about management decisions and carry out analysis.Moreover, these data can be stored “in the cloud,” aggregated with data from many operations, and used for analysis to improve on-farm management as well as for policy analysis as discussed in the previous section.Due to these technologies, some producers now have access to their past crop yield and related management data at the field or sub-field resolution.This information can be combined with satellite imagery, high-resolution spectral and thermal data obtained from UAVs, and weather data.These data provide the foundation for highly sophisticated, site-specific management – i.e., “precision agriculture” – that has the potential to substantially improve economic and environmental efficiency of management decisions and also provide the kind of information needed to meet both private and public demands for sustainable agricultural production.However, to achieve these efficiency improvements, the capability to effectively capture and analyze these data is needed.For example, despite these advances in data acquisition by
equipment sensors, variable rate application of nutrients and other agricultural chemicals continues to be based on simple rule-of-thumb or empirical approaches, rather than on model-based systems approaches that account for the interaction of soils, weather and related management decisions.In addition to the farm- and landscape-scale analyses discussed thus far, there will also be a growing demand for farm-level information to be integrated with other components of the agricultural value chain, to meet both policy requirements and consumer demands for more sustainably produced food products.Most agricultural technology impact assessment is carried out after technologies have been disseminated.However, there is a growing recognition of the need for forward-looking, or ex ante, technology impact assessment designed to anticipate both intended and unintended impacts.One of the most important growing applications of ex ante impact assessment is for climate adaptation and climate-smart agriculture.There is a widely recognized need to not only assess climate impacts on agricultural systems, but also to develop adaptation strategies and provide information to support farmer decision making for climate adaptation.There are two key elements of this type of analysis.First, the research team must project the future climate and socio-economic conditions in which the farm decision maker will be operating.New multi-disciplinary and participatory methods to create future scenarios for this type of analysis have recently been developed.Second, researchers need to obtain information about the potential adaptations and the ways that farm decision makers would implement them.Farm-level decision support tools linked to a web-based system could be used to obtain reliable information about farmers' current management practices, and also could be used to obtain their evaluations of management alternatives under conditions defined by future changes in climate, economic conditions and policies.Fig. 1 provides an overview of the features of farm-level data and decision tools, landscape-scale data and analytical tools that support science-based policy, and their interrelationships.While farm-level decision making and landscape-scale analysis have different purposes, they both depend on private as well as public data.A key question for the design of the agricultural knowledge infrastructure is how both types of data can be collected, managed and utilized efficiently and securely.Farm-level data and decision tools are evolving rapidly along with innovations in computer power, software, mobile information technologies and technologies for site-specific management.The left-hand side of Fig. 1 presents the generic structure of these tools, the data they use as inputs, and the outputs that are generated.Various decision tools and software are now in use which collect detailed information and generate outcomes that are useful for farm-level management decisions.This information and data can be used to monitor the economic and environmental performance of a farm operation over time and space.The right-hand side of Fig.
1 shows the general structure of the data and models needed to carry out landscape-scale research and policy tradeoff analysis.There are three broad categories of regional data: publicly available biophysical data, including down-scaled climate and soils data; publicly available economic data, including prices and policy information; and the confidential site- and farm-specific data obtained from producer- and industry-generated databases.As with farm-level decision tools, there is a need to more systematically develop and apply methods for the improvement of these models, for example through model inter-comparison studies such as those being undertaken by the Agricultural Model Inter-comparison and Improvement Project.Typically these models require spatially and temporally explicit data that are statistically representative of the farms and landscapes in a geographic region in order to provide reliable information about economic and environmental impacts and tradeoffs.The currently available data are inadequate for various reasons.Many model implementations rely on the publicly available information on land management collected periodically through mailed questionnaires or enumerator interviews, which usually limits the spatial dimension of the models to political units, agro-ecological zones or similar delineations.Consequently, models often must be operated with averaged data that may fail to accurately represent site-specific environmental processes and outcomes.Many data are collected with samples that are not statistically representative of relevant regions or populations for landscape-scale analysis; many data are not spatially or temporally explicit, are only available after substantial aggregation, and are often available with long time lags between when the land management decisions are made, the data are collected, and when they become available for research or policy purposes.For example, the 2012 U.S. 
agricultural census data were only available in 2014, and even then only in limited ways for research and policy analysis.Longitudinal data are particularly important for policy research, i.e., representative samples of farms that provide data for the same farms over time.The Living Standards Measurement Survey data being coordinated by the World Bank are being collected longitudinally in some countries now, but due to issues such as long respondent recall and limited statistical representation, these data have a number of substantial limitations.Another critical issue is data quality.Farmers lack incentives to bear the high costs of responding to lengthy questionnaires, and often lack detailed records needed to accurately respond to detailed questions about management inputs, production outputs, and prices paid and received, and various other details often asked in farm surveys.A tool that could be used by farmers to make management decisions, and simultaneously collect that information for research and policy analysis, could overcome these limitations.Landscape-scale policy analysis can be implemented using various spatially-explicit models designed to simulate the adoption and impact of new technologies, changes in policy, and environmental change such as climate change.In this section we briefly describe an economic impact assessment model called TOA-MD.TOA-MD provides a framework in which bio-physical and economic data can be integrated for technology impact assessment and policy analysis at the landscape scale.The TOA-MD model is a parsimonious, generic model for analysis of technology adoption, impact assessment, and ecosystem services.Further details on the conceptual and statistical foundations of the model are provided in Antle and Antle et al.The model software and the data used in various studies are available to researchers with documentation and self-guided learning modules at http://tradeoffs.oregonstate.edu.Various types of data can be used to implement an analysis using TOA-MD and other landscape-scale policy analysis models.The data can include farm production data, simulated outputs of bio-physical models, price projections from global or national market models, and data from alternative policy or climate scenarios, depending on the type of analysis.Estimation of parameters for TOA-MD and other spatially-explicit impact assessment models requires data from a statistically representative sample of the farm population, as discussed in Antle and Capalbo for econometric models, and in Troost and Berger for models based on mathematical programming.The TOA-MD model was designed to simulate technology adoption and impact in a population of heterogeneous farms.The TOA-MD model uses the standard economic model that is the foundation of the econometric policy evaluation literature.The analysis is applied to farm decision makers who choose between the production system they are currently using and an alternative production system.Each decision maker is assumed to choose the system with the highest expected return.Thus, in the population, the proportion of farmers using system 2 is determined by the distribution of the difference in expected economic returns between the two systems.Other impacts are estimated based on the statistical relationship between those variables and expected economic returns to the alternative systems.The outputs of the TOA-MD model include the predicted adoption rate of the alternative system, the average impacts on adopters, and the average
impacts on the entire population of farm households.The model can also generate indicators showing the percent of households experiencing an outcome above or below a defined threshold.An example of a threshold indicator is a poverty rate showing the percent of households with incomes below a poverty line.The analysis of technology adoption and its impacts depends critically on how the effects of the new technology interact with bio-physical and economic conditions faced by farm decision makers."A key element in the TOA-MD analysis is reliable estimates of the effect of the new technology on the farming system's productivity and profitability.This information can come from various sources, including from formal crop and livestock simulation models, from experimental or observational data, or from expert judgment.Two types of tradeoff analysis that can be carried out with TOA-MD are described by Antle et al. as adoption-based tradeoffs and price-based tradeoffs.Adoption-based tradeoffs occur when the adoption rate of a technology changes in response to an economic incentive or other factor affecting technology adoption.An important example of an adoption-based tradeoff is a policy to provide payments to farmers if they change management practices to increase the provision of ecosystem services such as soil carbon sequestration.Price-based tradeoffs occur when changes in the prices of the outputs or inputs used by farmers induce them to make changes in their land management decisions that in turn induce changes in the economic, environmental or social outcomes associated with the farming system.The analysis of Camelina sativa presented below is an example of a price-based tradeoff.AgBiz Logic is an analytical tool that integrates data, scenarios, economic and financial calculators and climate and environmental modules.It generates estimates of economic and environmental outcomes for current and alternative management practices.A key feature of AgBiz Logic that distinguishes it from many other farm management tools is that it is designed to analyze current and prospective management scenarios.This feature makes it a potentially useful tool to acquire data for a landscape-scale analytical tool such as TOA-MD.The AgBiz Logic software suite consists of the following economic and financial modules:AgBizProfit: capital investment tool that evaluates an array of short-, medium-, and long-term investments.The module uses the economic concepts of net present value, annual equivalence, and internal rate of return to analyze the potential profitability of a given investment.AgBizLease: a module to establish alternative short- and long-run crop, livestock and other capital investment leases.The module uses the economic concepts of net present value to analyze crop sharing or rental agreements under these alternatives.AgBizFinance: a module for making investment decisions based on financial liquidity, solvency, profitability, and efficiency of the farm or ranch business.After an AgBizFinance analysis has been created, investments in technology, conservation practices, value-added processes, or changes to cropping systems or livestock enterprises can be added to or deleted from the current farm and ranch operation.Changes in financial ratios and performance measures are also calculated.AgBizClimate: a module that translates information about climate change to farmers and land managers that can be incorporated into projections about future net returns.By using data unique to their specific farming 
operations and locations, growers can design management pathways that best fit their operations and increase net returns under alternative climate scenarios.AgBizEnvironment: a module that uses environmental models and other ecological accounting to quantify changes in environmental outcomes such as erosion, soil loss, soil carbon sequestration and GHG emissions associated with input levels and management practices.AgBiz Logic operates on the premise that growers want to maximize net returns over time, taking into account investment costs, operating expenses and revenues for crop and livestock products.This decision support tool has been used to quantify farm-scale tradeoffs associated with changes in climatic conditions.Capalbo et al. 2017 present an illustrative analysis of how climate change may impact dry-land wheat producing farmers in the U.S. Pacific Northwest."Projected changes in climate are translated into changes in key climate factors affecting the grower's yields via the AgBizClimate; these yield changes are transformed into net returns.These yield changes are the impetus for producer-generated adjustments in input use, management, and technology adoption.Decision tools and modules such as AgBiz Logic; provide essential analytical output for efforts labeled climate-smart agriculture which focuses on making farms and farmers more resilient to a changing climate."These decision support tools are at the very heart of the recommendations called for in the recent U.S. Government Accountability Office report 14–755, which speaks to USDA's ongoing efforts to better communicate information to growers in a timely downscaled manner.One of the greatest challenges in implementing policy analysis of alternative agricultural systems, such as adaptation to climate change, or responses to new policies or technologies, is the design of plausible “counter-factual” systems.AgBiz Logic is designed to be a farm-level scenario analysis tool, where the scenarios can involve any type of alternative management.This feature makes AgBiz Logic uniquely suited to serve as a data generation tool for policy analysis using a model like TOA-MD that requires data for current as well as prospective or future systems.AgBiz Logic provides a systematic framework in which farm decision makers can record their best estimates of the cost and productivity effects of a new system on their own farms.If this information could be acquired from a suitable sample of farms, it could then be used by analysts to estimate parameters of the TOA-MD model for landscape-scale policy analysis.The conventional way to obtain the farm production data is to conduct a survey, such as the surveys done periodically by government agencies.There are various limitations to farm survey data.One is that the data are often collected periodically, e.g., the U.S. agricultural census is carried out on five-year intervals, and then only made available to researchers with a substantial delay.Another major limitation is that the data often lack sufficient detail, particularly for management decisions such as fertilizer and chemical use, machinery use, and agricultural labor.A third limitation is that these surveys can be extremely expensive both for respondents and for organizations collecting the data.A tool like AgBiz Logic could be utilized to provide higher quality, more timely data at lower cost.As portrayed in Fig. 
1, a data system that linked farm management software to a confidential database could provide near real-time data on management decisions, and do so for a statistically representative “panel” of farm decision makers over time.Moreover, the level of detailed management data utilized by AgBiz Logic would provide the needed level of detail for implementation of analysis using a tool such as TOA-MD.Also, users of AgBiz Logic would have every incentive to enter accurate information because they would be using this information to make their actual management decisions.Finally, a tool like AgBiz Logic provides a user-friendly, efficient way for farmers to enter data, thus substantially reducing the cost of data collection.In this section we illustrate the use of TOA-MD to evaluate Camelina sativa for its potential use as a crop that could produce biodiesel fuel for aviation and other uses, particularly in regions where dryland cropping systems are currently dominant.Our goal is to illustrate the type of analysis that could be implemented using data that could be generated by a tool like AgBiz Logic."The policy question addressed in this example is whether it would be economically feasible to incorporate Camelina into the dryland wheat system currently in use in the U.S. Pacific Northwest as part of the U.S. Department of Energy's “Farm to Fly” initiative.Key issues for this initiative are the profitability of Camelina for farmers at prices competitive with fossil fuels, whether it would be possible to provide sufficient quantities to meet the goals of the private airline industry and the U.S. military, and what impacts biofuels would have on food production and prices.Table 1 summarizes the farm level revenue and cost data used in this example."These data were obtained from farmers' responses to the 2007 Agricultural Census, but similar data could have been obtained using AgBiz Logic from a statistically representative sample of farms.We implement the analysis using agricultural census data to illustrate the analysis that could be done using similar data obtained from AgBiz Logic.Wheat is produced in the PNW in various rotations with fallow and with other crops.This analysis involves incorporating Camelina into the winter wheat-fallow system practices in low-rainfall areas.The WWF system has winter wheat planted in the fall and harvested in mid-summer of the following year, with no crop planted the next season, to restore soil moisture.Thus, a crop is typically planted and harvested on only half of the available land each year.The alternative system we analyze here, denoted WWC, involves replacing the fallowed land with Camelina so that half of the land is planted to winter wheat each year and half is planted to Camelina in a rotation.Experimental data from the region show that this rotation would likely result in a reduction in winter wheat yields from the regional average of about 50 bu/ac to about 33 bu/ac on average, with an average Camelina yield of 1400 lb per acre.The next step for the TOA-MD analysis is to construct similar data for the alternative WWC system.As we noted in the previous section, if AgBiz Logic were used to generate data for this alternative system, participating farmers would be provided available information about Camelina such as experimental yields, and the farmers would provide estimates of the yields they would expect to obtain, along with estimates of costs of production for the practices they would implement.To represent the data that could be obtained from AgBiz 
Logic, we use data obtained from enterprise budgets constructed by farmers and extension economists, and experimental yield data for Camelina cited above.Based on experimental data showing that elimination of fallow would reduce wheat yields 35%, the revenue per acre for winter wheat is decreased accordingly, and the cost of production is reduced because fallow costs are eliminated.In place of the fallow, the Camelina crop is assumed to yield 1400 lb per acre, and there are similar cost components as noted above for winter wheat.The result is a net return that varies with the price assumed for Camelina as shown in Table 2.A low Camelina price of $0.10/lb would provide a net return to the WWC system similar to the WWF system.Recent market prices for oilseeds similar to Camelina have been in the range of $0.15/lb.For analysis of the adoption of a new system using the TOA-MD model, we need estimates of average returns which we interpret as the data from the enterprise budgets described above, and we also need an estimate of the variance of economic returns in the farm population.The use of AgBiz Logic to collect data for this scenario from farmers would provide an estimate of this variance.Lacking these data, we assume that the coefficient of variation of Camelina returns in the population is similar to the coefficient of variation of returns to winter wheat, and combine that estimate with the estimate of average returns to calculate a variance.The TOA-MD model also requires a value for the correlation between the returns to the WWC and WWF systems.This parameter also could be estimated from data generated by farmers using AgBiz Logic to evaluate the WWC system.Lacking these data, we set the value to 0.75, a typical value for this parameter when it can be estimated with observational data.Table 2 summarizes the average net returns for the WWF and WWC systems, for small and large farm groups, predicted adoption rates and economic impact obtained from the TOA-MD model for Camelina prices ranging from a low value of $0.10 to a high value of $0.30 per pound.Prices in the range of $0.10 to $0.15 per pound result in relatively low adoption rates of 20 to 60%, whereas at prices above $0.20/lb adoption would increase to 80–95%, depending on farm size.Thus, the analysis shows that adoption of the WWC system would increase substantially if prices were in this higher range.It is difficult to know what the market price of Camelina would be if a biofuel market were developed, but this analysis shows that a price substantially above the recent oilseed market price would be required to induce a high rate of adoption of the WWC system.The analysis also shows somewhat higher adoption rates for larger farms.Examination of the WWF data shows larger farms earn a larger proportion of their income from wheat, and thus benefit relatively more when Camelina becomes profitable at high prices, compared to smaller farms that earn somewhat more of their income from non-wheat crops and government subsidies.The economic impacts of the WWC system are represented in two ways in Table 2.The middle column of the table presents the “average treatment effect” which is the average impact of the WWC system relative to the WWF system if it were adopted by all farms.However, adoption is not random for farmers who choose the system with the highest expected economic returns.Under this behavior, the economic impact on the adopters is measured by the average treatment effect on the treated, the last column of Table 2.The ATT is equal 
to the difference between the returns from the WWC system for the adopters and the returns the adopters would receive from the WWF system if they did not adopt.To summarize, using the results in Table 2, two important implications for the economic impacts of the WWC system are: the average return of this system is negative for low Camelina prices, meaning that the WWF system would provide higher returns than the WWC on average in the farm population.It follows that the adoption rate will be less than 50% as Table 2 shows for the relatively low Camelina price of $0.10; and for those who do adopt WWC, the return is necessarily positive and increases with the Camelina price, as indicated by the average treatment effect on the treated, or ATT.Fig. 4 presents the implications of the TOA-MD analysis for Camelina supply.By running the simulations for a range of prices, we estimate the willingness of farmers to switch from the WWF to the WWC systems and thus increase Camelina production by replacing fallow acres with Camelina.The figure shows a relatively elastic response to the Camelina price in the $0.10 to $0.225 range.The figure also shows how the supply would be affected by lower or higher wheat prices, with higher wheat prices discouraging WWC adoption.Under our assumption that wheat yields would be reduced, WWC adoption would also result in lower wheat production, reducing it by up to 18% at a high Camelina price and a low wheat price.This finding shows that there would be a tradeoff between biofuel production and food grain production.This tradeoff is of interest to policy decision makers, such as the United States Navy, who are concerned about the effect that biofuels could have on food prices.Coupling this analysis to a market equilibrium analysis would provide further information about the possible price and economic impacts of a policy supporting biofuels, e.g., as in Reimer and Zheng.In this paper we describe two analytical tools – AgBiz Logic and TOA-MD – that demonstrate the current capability of farm-level and landscape-scale models to meet the needs for improved data, models and knowledge products.In their present form, these models provide substantial capability to address the data challenges identified by Jones et al., Antle et al., and Janssen et al.However, there are also needs for these and similar models to be improved.First, models need to be more useful to farm decision makers.As Antle et al. 
observe, users do not want models per se, rather they want the information they can produce.This means that models must be embedded in decision support tools that have value to farm managers.One improvement could be to automate data collection using sensors on machinery and other mobile devices, as well as from web-based sources such as weather, and economic data such as prices.Another area of improvement is inter-operability of tools like AgBiz Logic with farm accounting and tax preparation software, so that information can be entered once and then utilized in an integrated way across multiple analytical tools.Another area for improvement is inter-operability with other models or model output databases, such as crop simulation models and environmental impact models.Similar ease-of-use and inter-operability issues apply to analytical tools for landscape-scale analysis like TOA-MD and other simulation models, such as crop or environmental process models that may be used with it.Data from tools like AgBiz Logic needs to be integrated with cloud-based systems and with the other public data needed to implement a landscape scale analysis identified in Fig. 3.The current approach of manually carrying out this integration on a case-by-case basis makes this type of analysis costly even in a small geographic region, and often makes integration infeasible across larger regions.Second, the fact that virtually all stakeholders want access to model outputs, rather than access to the models themselves, means that there is a demand for “knowledge products,” i.e., tools that facilitate access to model outputs and provide analytical capability to interpret model outputs for decision making.As Janssen et al. observe, it remains to be seen what form these knowledge products will take – as “apps” on mobile devices, as is now being done for some types of decision making such as pesticide spray decisions – or as larger tablet computer dashboards for data visualization and additional processing through meta-models and other analytical tools.The fact that these knowledge products have been slow to materialize suggests some form of “market failure” – i.e., some constraints that prevent this latent demand from being expressed and satisfied.A perusal of the rapidly emerging market for private advisory services utilizing “big data” shows that, at least in some parts of the world such as the United States where large-scale commercial agriculture dominates, this latent demand is beginning to be met by private industry.Yet it remains to be seen how this emerging private supply of information services will operate, and whether it can also satisfy the public good uses of such data.It is even less clear how these technologies can serve the needs of the small-scale farmers in the developing world where commercialization lags.As suggested by Antle et al., one solution to these challenges appears to be private-public partnerships among the various organizations that have a mutual interest in assuring that the data are obtained efficiently and used appropriately for both private and public purposes.Such partnerships could help create a pre-competitive space for the development of data and analytical tools that is built on the recognition that there are important public-good attributes of the data, methods and analytical tools, linked to a competitive space to incentivize the commercial development of improved knowledge products.Several challenges need to be addressed to facilitate a linkage between farm-level management tools 
such as AgBiz Logic and policy analysis tools like TOA-MD and other landscape scale models.First, a statistically representative group of farms would need to be identified who would agree to use AgBiz Logic and allow their data to be used in a landscape scale analysis.This would involve a sampling process similar to identifying a sample of farms for a farm-level economic survey.Second, software would need to be designed to transmit and assemble the individual farm data into a database that could subsequently be used to estimate TOA-MD parameters while maintaining confidentiality of individual producers.Note that data would need to be collected over multiple growing seasons in most cases to account for crop rotations and other dynamic aspects of the farming system.Farm household characteristic data could be collected as a part of AgBiz Logic, or could be collected using a separate survey instrument.Environmental and social outcome data collection would need to be tailored to the specific type of variable.For example, measurement of soil organic matter could require infield soil sampling and laboratory analysis, possibly combined with modeling, or the use of specialized sensors.For scenario analysis, it is necessary to project from current biophysical and socioeconomic conditions into the alternative conditions described by a scenario.For climate impact assessment, this is currently being done on a global scale using new scenario concepts called “Representative Concentration Pathways” and “Shared Socio-Economic Pathways.,To translate these future pathways into ones with more detail needed for agricultural assessments, “Representative Agricultural Pathways” are being developed.The data acquired through tools such as AgBiz Logic could be combined with these future projections to implement regional integrated assessments using the methods developed by the Agricultural Model Inter-comparison and Improvement Project.
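As a concrete illustration of the adoption logic described above, the short Python sketch below computes the three quantities the TOA-MD discussion centres on: the predicted adoption rate of an alternative system, the average treatment effect (ATE), and the average treatment effect on the treated (ATT). It assumes, purely for the sketch, that the per-farm difference in expected returns between the two systems is normally distributed across the population; the function name, the closed-form truncated-normal expression, and the illustrative return figures are ours and are not taken from the released TOA-MD software or from Table 2, which may parameterize the problem differently.

import math

def adoption_and_impacts(mean_base, mean_alt, sd_base, sd_alt, rho):
    """Predicted adoption rate, ATE and ATT for an alternative system,
    assuming the per-farm return difference (alt minus base) is normal."""
    mu = mean_alt - mean_base                                   # ATE
    sd = math.sqrt(sd_base**2 + sd_alt**2 - 2 * rho * sd_base * sd_alt)
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # std normal pdf
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))            # std normal cdf
    z = mu / sd
    adoption_rate = Phi(z)                      # share of farms for which the alternative pays
    att = mu + sd * phi(z) / Phi(z)             # mean gain among adopters only (truncated mean)
    return adoption_rate, mu, att

# Illustrative per-acre figures only (not the Table 2 values): WWF vs WWC at a low Camelina price
rate, ate, att = adoption_and_impacts(mean_base=60.0, mean_alt=55.0,
                                      sd_base=30.0, sd_alt=33.0, rho=0.75)
print(f"adoption = {rate:.0%}, ATE = {ate:.1f}, ATT = {att:.1f}")

Even with these made-up numbers the qualitative pattern reported in Table 2 emerges: a negative ATE pushes the predicted adoption rate below 50%, while the ATT for the self-selected adopters remains positive. Re-running the calculation over a grid of Camelina prices, with each price implying a different mean return for the WWC system, traces out an adoption (supply-style) curve analogous to Fig. 4.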
Research on next generation agricultural systems models shows that the most important current limitation is data, both for on-farm decision support and for research investment and policy decision making. One of the greatest data challenges is to obtain reliable data on farm management decision making, both for current conditions and under scenarios of changed bio-physical and socio-economic conditions. This paper presents a framework for the use of farm-level and landscape-scale models and data to provide analysis that could be used in NextGen knowledge products, such as mobile applications or personal computer data analysis and visualization software. We describe two analytical tools - AgBiz Logic and TOA-MD - that demonstrate the current capability of farm-level and landscape-scale models. The use of these tools is explored with a case study of an oilseed crop, Camelina sativa, which could be used to produce jet aviation fuel. We conclude with a discussion of innovations needed to facilitate the use of farm and policy-level models to generate data and analysis for improved knowledge products.
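The AgBizProfit module described in the article above is said to rank investments by net present value, annual equivalence, and internal rate of return. For readers unfamiliar with those metrics, the following Python sketch shows textbook implementations of all three; it is not the module's actual code, and the cash-flow figures are hypothetical.

def npv(rate, cash_flows):
    """Net present value of end-of-year cash flows; cash_flows[0] is the
    initial (year-0) outlay, entered as a negative number."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def annual_equivalence(rate, cash_flows):
    """Spread the NPV into a constant annual payment over the project life
    (the 'annual equivalence' the module description refers to)."""
    n = len(cash_flows) - 1                      # project life in years
    annuity_factor = (1 - (1 + rate) ** -n) / rate
    return npv(rate, cash_flows) / annuity_factor

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection; assumes NPV changes sign on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical investment: $50,000 outlay followed by five years of net returns
flows = [-50_000, 14_000, 14_000, 14_000, 14_000, 14_000]
print(round(npv(0.06, flows), 2), round(annual_equivalence(0.06, flows), 2), round(irr(flows), 4))

Bisection is used for the IRR only because it keeps the sketch dependency-free; any root finder would serve.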
459
Photosynthetic performance of soybean plants to water deficit under high and low light intensity
As a result of the increasing human population, the demand for food has been growing steadily over the last century.Meanwhile, food producers are experiencing greater competition for land, water and energy, while balancing the negative effects of food production.Multiple cropping systems using crop rotations or intercropping can maximize resource use, and produce greater yield on a given piece of land.Some of the benefits of intercropping are increase in yield, improved efficiency different environmental resources, pest and disease suppression and biological nitrogen fixation.As a result, multiple cropping systems such as the legume/non-legume intercropping system, grain multiple cropping and wheat-corn/soybean relay strip intercropping system, are becoming popular in China.In the multiple cropping systems, plants are typically exposed to several stressors simultaneously.For example, soybean crops grown along maize in a relay strip intercropping system can experience limited light intensity from the shade of maize, and limited water availability.Physiological responses of evergreen and deciduous tree leaves to various sunlight-drought scenarios have shown that shading could ameliorate, or at least not aggravate, the impact of drought.This is because the performance of leaves under drought stress depends on how much light the leaves receive.Shade by the tree canopy has indirect effects, such as reducing leaf and air temperatures.Shade can also reduce the understory temperatures, and affect vapor pressure deficits and oxidative stress to alleviate the impact of drought on plants and seedlings in the understory.Shading conditions can allow olive trees to maintain high photosynthetic activity at low values of stomatal conductance.In contrast, exposed plants can experience reductions in photosynthetic efficiency and intrinsic water efficiency due to difference in the activity of non-stomatal components of photosynthesis.Additionally, the decrease in photosynthetic activity and the increase in photoinhibition during drought are more marked in exposed plants than in shaded plants.In the soybean plant, short-term shading can reduce photosynthesis, leaf temperature, stomatal conductance, transpiration and water use efficiency and increase intercellular CO2 partial pressure, which leads to carbon gain and water loss.Photosynthetic rate, stomatal conductance and transpiration rate of soybean plants significantly decline under water stress, while the intercellular CO2 concentration changes only slightly at the initiation of the stress treatment.Excessive energy in LHC, reaction center of PSII or PSI can cause pigment bleaching in sun leaves, the excessive energy can induce photoinhibition, thereby damaging pigments through oxidative stress.Shade reduces the chloroplast coupling factor and shifts light-harvesting capacity in soybean plants.The low level of Chl contents in grapevine leaves at high photosynthetic photon flux density largely results from the decay of Chl that is likely enhanced by chlorophyllase activity.However, less is known about whether differences in light intensity can influence the impact of drought on photosynthetic performance of the soybean plant.To better understand this, we investigated the impact of temporary shade and water shortage on the photosynthetic performance of soybean plants.We designed our experiments to determine photosynthetic and chlorophyll fluorescence characteristics as affected by drought, low light intensity stresses and their combination; and elucidate the 
relationships between them.Soybean cultivar Gongxuan No. 1, a major component of southwestern indeterminate soybean cultivars was tested in the experiments performed in 2011 and 2012.Each seed was weighed individually and sown in cylindrical pots of 14-L volume.The pots contained 13 kg soil composed of 50% sand, 47.5% clay and 2.5% organic matter.The soil was mixed with fertilizer consisting of N = 0.355 g, P2O5 = 0.556 g and K2O = 0.406 g. Fertilizers were applied after emergence, with 3 g single super phosphate, 1 g potassium sulfate and 1.5 g of urea per pot.The experiment was carried out in a glasshouse of the Sichuan Agricultural University, and the greenhouse had an upper ceiling automatic closure system that was utilized when it rained.Soybean plants were subjected to two light intensity levels: high light intensity treatment, where the soybean plants received normal light intensity from the sun when it was sunny, with additional light intensity inside the glasshouse when it was rainy; low light intensity treatment, where the soybean plants were covered by a shade cloth or were under the shade of corn.These experimental light intensity treatments were used to simulate field conditions in the relay strip intercropping system, distinguishing two types of microhabitats: sole cropping soybean and relay strip intercropping soybean.In the experiment conducted in 2011, the light intensity that penetrated through the shade cloth to the soybean plants was 65%.In 2012, the light intensity that penetrated through the maize canopy to the soybean plants was 80% when the soybean was sown, 65% at the vegetative stage, 72% at the reproductive stage and 70% when the soybean plant was in the reproductive stage and the maize was at maturity.Maize is 2.6 m in height, and the whole growth period is around 109 days.Each of the watering treatments were set up within each shade frame and replicated four times, each by one plant in a single pot.Pots were watered every two days during the first stage of the experiment.Once the soybean seedlings reached V5 stage, two months after sowing, two separate water treatments were applied.Half of the pots were kept continuously moist, and the other half were maintained at moderate drought conditions in 2011.In 2012, half of the pots were not watered, while the other half was kept continuously moist.The 2012, LW treatment simulated a typical climate situation of seasonal drought in Southwestern China, as compared to a continuously moist treatment.During the experiment, we measured soil moisture in volumetric water content along the first 20 cm depth with a TRIME-PICO on a daily basis, in a subsample of five pots under different light intensity and water treatments.We did this because the water content changes were different in pots under LW treatments for the two light intensity treatments.A micro-meteorological machine that included sensors for air temperature, relative humidity and light intensity was used to measure microenvironmental parameters.Readings from each sensor were recorded every 6°min with a Hobo data logger.Two additional data loggers were installed to record air temperature measured with sensors attached to the abaxial side of leaves of four plants in each light intensity treatment.From the data, we could see that the light intensity and air temperature of the LI soybean group were lower than the HI soybean group, while relative humidity was opposite.RLWC of leaves was calculated using the standard formula .FW, HydW and DW stand for the leaf fresh 
weight, hydrated and dry weights, respectively.The hydrated weight was determined by weighing the leaf after 24 h of immersion in distilled water in a sealed flask at room temperature.Dry weight was determined gravimetrically after drying to steady weight at 70 °C in an oven.Soybean leaves were harvested daily during the V5 stage.Five plants were randomly chosen, and one of the most recently expanded leaves was selected from each plant.The beginning point of the non-hydraulic root signals were determined depending on when there was a significant lowering of leaf stomatal conductance without change in leaf RWC.The hydraulic root signal was judged to begin when there were significant differences for both of the above leaf parameters.Epidermal replicas of leaflets were made by coating the adaxial surfaces with clear fingernail polish.Then, the dried films were peeled and mounted on slides.Images were observed using Nikon eclipse 50i under 40 × magnification.A Nikon Digital Sight DS-U microscope camera controller was used to transfer images to a PC computer.Percentage of the open stomata was determined for each surface by randomly counting open stomata and total stomata numbers for 1 mm2 in 10 different fields.The percentage of open stomata was calculated as the ratio of open stomata to total stomata numbers, which was used to calculate the average.Stomatal aperture length was measured by identifying the widest aperture using the Motic Image Plus 2.0 Digital Microscopy Software.Leaves were sealed in plastic bags and kept on ice or refrigerated until further processed.Photosynthetic pigments were extracted according to the Arnon method.Measurements were taken from the most recently expanded leaf of five randomly chosen plants.In 2011, leaf samples avoiding the veins were used.In 2012, 10 leaf discs avoiding the veins were cut from the centre of each leaf.The developed color was measured at three wavelengths 470, 646 and 663 nm, after leaves were immersed in 10 ml 80% acetone for 24 h until no green color was present in the leaves.The amounts of pigments were calculated according to established equations.The net photosynthesis rate, stomatal conductance and transpiration were measured with a Portable Photosynthesis System.Water use efficiency was calculated as Pn/Tr.The parameters were measured daily, after water stress was applied from 8:00 am to 12:00 am.Five plants were randomly chosen, and one of the most recently expanded leaves was selected from each plant four times.The photosynthetically active radiation, provided by an LED light source, was set to 1200 μmol m-2 s-2.The flow rate of air through the sample chamber was set at 500 μmol-1 s− 1, and the leaf temperature was maintained at 25 ± 0.8 °C by thermoelectric coolers."The CO2 concentration of the chamber was adjusted to 400 μl l− 1 with the system's CO2 injector.Chlorophyll fluorescence was measured by a fluorescence monitoring system on randomly selected leaves of plants at 0:00–4:00 am.Following 30 min of dark adaptation, the minimum chlorophyll fluorescence was determined using a measuring beam of 0.2 μmol m− 2 s− 1 light intensity.A saturating pulse was used to obtain the maximum fluorescence in the dark-adapted state.Maximum quantum yield of PSII was calculated as/Fm.Following from this, an actinic light was applied, subsequently, further saturating flashes were applied at appropriate intervals to measure the Fm′.Ft is the steady-state fluorescence in the light-adapted state.Three seconds after the removal of actinic light, Fo′ 
was measured using a far-red light of 5 W m−2. Quantum yield of PSII was calculated as ΦPSII = (Fm′ − Ft)/Fm′. Photochemical quenching was calculated as qP = (Fm′ − Ft)/(Fm′ − Fo′). Non-photochemical quenching was calculated as NPQ = (Fm − Fm′)/Fm′ according to Maxwell and Johnson. Apparent photosynthetic electron transport rate was calculated as ETR = ΦPSII × PAR × 0.5 × 0.84. Transport of one electron requires absorption of two quanta, as two photosystems are involved. It is assumed that 84% of the incident quanta are absorbed by the leaf. The leaf area was measured using a scanner with a leaf area calculation program. Following this measurement, all leaves were pooled and dried at 70 °C to constant mass before weighing. The specific leaf area (SLA) was calculated as the ratio of the leaf area to leaf dry weight. Leaf area of each plant was calculated using the standard formula SLA × dry weight of the corresponding plant. Samples were harvested 14 days after drought stress in 2011, and nine days after receiving no water in 2012. The experiments were organized as a factorial design, in which light intensity treatments were the main-plot factors, and water treatments were the subplot factors. Results were analyzed by two-way analysis of variance, and means were compared by Duncan's multiple range tests at P < 0.05 or P < 0.01. All data were organized in Excel spreadsheets and processed by the software Statistical Package for the Social Sciences version 11.5. We first measured the percentage of open stomata and stomatal aperture length to understand the effects of water deficit and shade on stomata of the soybean plant. Water deficiency resulted in a 4.33% decrease in open stomata on the upper leaf surface under high light intensity, and a 6.27% decrease under low light intensity. We observed significant reductions in stomata size in the drought-stressed plants, but stomatal aperture length was higher under shade treatment when compared to high light intensity treatment. The pigment contents of soybean plants for different light intensities and water treatments are summarized in Fig.
2.Chl a and Car content, ratios of Chl a/b and Car/Chl were significantly reduced under low water conditions compared to the high water-treated plants.But Chl b and Chl content increased when plants suffered from water deficit.Under LW conditions, Chl a, Chl b, Chl, Car and ratio of Car/Chl were higher in the LI group compared to the HI group.Conversely, the ratio of Chl a/b was lower under LW treatment in LI compared to HI.The reduction in the availability of light intensity and water resulted in structural changes to the soybean leaves, and affected their photosynthetic performance.Reduction in the availability of water resulted in a decrease in the light saturation point.Under LI treatment, the LW-induced reduction of LSP was alleviated.We observed a reduction in photosynthetic rate under water stress.The reduction of Pn by water stress was 98.77% in the HI group and 96.55% in the LI group.Stomatal conductance, transpiration rate and water use efficiency were also reduced.The decrease in Gs was 98.79% in the HI group and 88.81% in the LI group.Tr was reduced by 97.84% in the HI group and 92.81% in the LI group.When plants were under water stress, intercellular CO2 concentration increased by 53.64% for HI treatment and 209.7% for LI treatment.WUE was reduced by 43.41% in the HI group and 51.63% in the LI group.Under LI treatment, the reduction in Pn, Gs, Tr by LW were alleviated compared to HI treatment.The drought tolerance of soybean plants was evaluated by treating plants to seven days of drought stress, and then analyzing several fluorescence parameters determined under dark-adapted and steady state conditions.In control leaves, maximum quantum yield of PSII was approximately 0.78–0.80.This parameter decreased in response to drought stress in all leaves, but was not significantly different.Additionally, drought stress resulted in a reduction in quantum yield of PSII in all leaves.The reduced ΦPSII was a result of a decrease in the excitation energy trapping efficiency of PSII reaction centers.A significant decrease in qP was also observed in all drought stressed plants, indicating that there was a change in the balance between the excitation rate and the electron transfer rate.This change may have led to a reduced state of the PSII reaction centers.We also observed an increase in NPQ under drought conditions, which reflect the non-photochemical energy dissipation in all plants, and the increase in NPQ levels were significant compared to controls.Additionally, NPQ levels in drought conditions were significantly higher in shaded soybean plants.RLWC decreased with reduced soil water content, but was higher in shaded soybean leaves compared with exposed soybean leaves.Special leaf area increased under drought and low-light intensity conditions.Leaf area per plant was significantly reduced under drought stress.The leaf area per plant was highest under the HW-LI treatment.To assess whether the difference in photosynthetic parameters among soil moisture gradients of exposed and shaded plants was associated with variations in other parameters, we conducted a regression analysis between soil moisture and RWC, Pn and Gs, Tr and Gs, etc.Regression analysis of corresponding values among eight different treatments showed that there was a quadratic line function between RLWC and RSWC .This revealed that when RSWC decreased, RLWC reduced to the threshold.Pn and Tr were quadratically correlated with Gs .This revealed that the decrease in Gs caused the reduction in Pn and Tr to some extent.RLWC was 
the reason for changes in the value of Gs.ΦPSII, Fv/Fm and chl were significantly correlated with Pn .Light intensity and water treatments had significant influences on leaf area per plant, Chl b, Chl, Chl a/b ratio, Pn, Gs, WUE, ΦPSII, qP and NPQ.The P values of the parameters that were listed here reached 0.01 or 0.05.We found significant interactions between light intensity and water treatments on leaf area, Chl b, Chl, Pn, Gs, Tr, WUE and qP, with the P values lower than 0.01 or 0.05.Previous studies have shown that leaves subjected to drought exhibit large reductions in RLWC and water potential.In our experimental drought conditions, the water availability of the soil was lower, and the exposed high light intensity plants experienced lower relative humidity and higher temperatures.The lower RLWC detected in the exposed water-stressed plants reflect the fact that both high light intensity and the soil water deficiency resulted in the dehydration of plant tissue.The improvement in RLWC values in the shaded plants may be attributed to the higher relative humidity and lower temperature in the environment.Shading may also decrease water loss and improve water uptake by improved root growth and root hydraulic conductance.Stomata are sensitive to RLWC, and tend to close with decreasing RLWC, which can result in lower Gs levels in exposed plants.This decrease in Gs may be caused by the reduced open stomata ratio and stomatal aperture size in exposed water-stressed plants.Stomatal closure primarily causes a decline in the photosynthesis rate."The variation in Ci can be used as a standard to estimate the reasons for decreased Pn, and whether decreases in Gs or reductions of mesophyll can result in changes in the cell's photosynthetic capacity.In this study, we observed a decrease in Gs and increase in Ci in the water-stressed plants.These results suggest that the strong decrease in Pn in water-stressed plants may be caused by the closure of stomata, and reduction in the photosynthetic capacity of mesophyll cells, which in turn results in increased Ci.Chlorophyll is the photosynthetic pigment of plants."Chlorophyll content can serve as a measure for the plant's ability to use light, as chlorophyll plays a central role in the absorption and transmission of light quantum.Absorption spectra of Chl a and Chl b are similar, but the absorption peaks in the red light is higher in Chl a, and the absorption peaks in blue light are higher in Chl b.Therefore, the relative increase of Chl b enables plants to improve efficiency of blue-violet light absorption, and adapt to a shaded environment.Studies show that chlorophyll content of shade-tolerant plants increases under shade.In this study, water-stressed plants had lower Chl a, Chl b and Car content, and the ratios of Chl a/b, Car/Chl were also lower.Shaded plants maintained a higher Chl a, Chl b and Car content, lower ratios of Chl a/b, Car/Chl when under drought stress.The results are in agreement with Cicek and Çakırlar, who reported that salt stress affects the Chl a/b ratio in several soybean cultivars.Some of the cultivars seemed to adapt to the salt stress by reducing their Chl a/b ratio, which suggests that those cultivars may have a larger antenna size.In conclusion, shaded soybean plants show enhanced ability to capture and use light by increasing the chlorophyll content and reducing the impact of primary reactions.Zlatev and Yordanov reported that drought stress induced an increase in Fo and a decrease in Fm, and an associated increase in NPQ in 
bean plants.In our study, drought-stressed soybean plants showed increased NPQ within shaded and exposed plants, but shaded soybean plants had lower NPQ compared to exposed soybean plants.The lower NPQ levels may cause an increase in the probability of heat emission, lowering the trapping efficiency of open reaction centers for shaded soybean plants.Quantum yield of PSII is the product of the efficiency of the open reaction centers and the photochemical quenching.In all plants under drought treatment, we observed a decrease in qP, indicating that a larger percentage of the PSII reaction centers was closed at any time.This in turn indicates a change in the balance between excitation rate and electron transfer rate.In our study, shaded soybean plants had higher qP, ΦPSII and ETR, as compared to the exposed soybean plants when under drought stress.This could be because drought did not have a serious effect on the shaded soybean plants compared to exposed soybean plants.Light intensity is known to be the main factor that promotes shifts in SLA.Water stress has also been reported as an environmental factor that may increase SLW.The increase in SLA has been interpreted as a mechanism to optimize light harvesting under low light intensity conditions.However, the higher SLA in shaded seedlings would result in reduced efficiency of controlling water losses under drought conditions.In our study, the highest increase of SLA was observed in shaded seedlings.This implies that there may be a higher leaf area per unit for light harvesting, and better drought SLW elasticity under limited light conditions.In conclusion, the photosynthetic performance of soybean plants was severely reduced in drought conditions, but shading alleviated the drought impact.Results of this study suggest that shaded soybean plants have enhanced drought tolerance due to increased Gs, Tr, pigment content, qP, ΦPSII, ETR and decreased Chl a/b ratio to maintain a higher Pn.Taking into account the economic importance of soybean, the study is of potential importance in an applied context, especially in Southwest China where growth conditions applied in the present study are typical for soybean cultures.
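The derived quantities reported throughout this soybean study can be reproduced directly from the Methods. The Python sketch below implements the stated formulas for ΦPSII, qP, NPQ and ETR; because the numerators of Fv/Fm and RLWC did not survive text extraction, the standard Maxwell and Johnson definition of Fv/Fm and the usual (FW − DW)/(HydW − DW) × 100 form of RLWC are assumed. The function names and the example values are illustrative, not measurements from this experiment.

def fluorescence_params(Fo, Fm, Fo_p, Fm_p, Ft, PAR):
    """Chlorophyll fluorescence parameters; Fv/Fm follows the standard
    Maxwell and Johnson convention (assumed, since the text is incomplete)."""
    fv_fm = (Fm - Fo) / Fm                 # maximum quantum yield of PSII
    phi_psii = (Fm_p - Ft) / Fm_p          # effective quantum yield of PSII
    qp = (Fm_p - Ft) / (Fm_p - Fo_p)       # photochemical quenching
    npq = (Fm - Fm_p) / Fm_p               # non-photochemical quenching
    etr = phi_psii * PAR * 0.5 * 0.84      # apparent electron transport rate
    return {"Fv/Fm": fv_fm, "PhiPSII": phi_psii, "qP": qp, "NPQ": npq, "ETR": etr}

def rlwc(fresh_w, dry_w, hydrated_w):
    """Relative leaf water content (%) from fresh, dry and fully hydrated weights."""
    return 100 * (fresh_w - dry_w) / (hydrated_w - dry_w)

def wue(Pn, Tr):
    """Water use efficiency as net photosynthesis per unit transpiration (Pn/Tr)."""
    return Pn / Tr

# Illustrative numbers only, not data from this study; PAR matches the 1200 umol m-2 s-1 LED setting
print(fluorescence_params(Fo=0.20, Fm=1.00, Fo_p=0.25, Fm_p=0.60, Ft=0.40, PAR=1200))
print(round(rlwc(fresh_w=0.95, dry_w=0.20, hydrated_w=1.05), 1), round(wue(Pn=18.0, Tr=6.0), 2))

With the example inputs the routine returns Fv/Fm = 0.80, which falls inside the 0.78–0.80 range reported for control leaves above.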
The two major challenges to relay strip intercropping soybean production in Southwest China are drought and low light intensity. This study tests whether the impact of drought on the photosynthetic performance of soybean plants is different between low and high light intensity conditions. To investigate this, soybean plants were grown in pots in a factorial experiment at two irrigation regimes (75 ± 2% and 45 ± 2% of soil field capacity) and two light intensity treatments (100% and 65% light intensity) in 2011. In 2012, soybean plants were grown in two irrigation regimes (75 ± 2% of soil field capacity vs. progressive soil drying) and two light intensity treatments (sole cropping soybean and relay strip intercropping soybean). Photosynthetic performance was assessed by measuring parameters such as net photosynthetic rate (Pn), stomatal conductance (Gs), water use efficiency (WUE), which were decreased significantly in drought stressed plants. We also observed differences in the photosynthetic responses of soybean plants to drought depending on the light intensity treatment the plants were subjected to. Shaded soybean plants in response to drought conditions had increased chlorophyll a (Chl a), chlorophyll b (Chl b), chlorophyll (Chl), carotenoid (Car), ratio of Car/Chl, leaf relative water content (RLWC), leaf area per plant, specific leaf area (SLA), Pn, Gs, intercellular CO2 concentration (Ci), transpiration rate (Tr), photochemical quenching (qP) and electron transport rate (ETR). The above-mentioned photosynthetic changes may play an important role in determining how shaded soybean plants adjust their photosynthetic rate when experiencing drought conditions.
460
Degradation of organophosphate esters in sewage sludge: Effects of aerobic/anaerobic treatments and bacterial community compositions
Organophosphate esters are widely used as flame retardants and plasticizers in recent years .Because of the potential risks for human health, OPEs are regarded as a class of emerging pollutants .High concentration levels of OPEs were found in the dewatered sewage sludge because of the adsorption on the activated sludge during the wastewater treatment process .Composting is an effective way to realize the sludge recycling and harmless disposal .The matrix in the composts was complex and the spiked recoveries were usually low.Accelerated solvent extraction combined with solid phase extraction method was used for the determination of OPEs in this study.Detail information was provided in our previous work .The concentration of OPEs in collected samples during the whole process was listed in Tables 1–4.The principal components analysis was shown in Fig. 1.Briefly, the extraction procedure was performed on a Dionex ASE 350 system.Small amount of diatomaceous earth and 0.5 g of sample were loaded into a 33 mL capacity stainless steel cell.Each sample was spiked with 10 μL of TnBP-d27 at 5 mg L−1 as surrogate before extraction.Additional diatomaceous earth was added to fill the remaining free space of the cell.Two pieces of cellulose filter were placed on the bottom and top of the extraction cell, respectively.After ASE procedure, the extract was evaporated to almost dryness by using a rotary evaporator.The extract was re-dissolved in 6 mL of ACN and diluted to 200 mL with ultrapure water.The solution was filtered by GF/C membrane and then subjected to an Oasis HLB cartridge.The analytes were eluted by 8 mL of acetonitrile and then concentrated to nearly dryness.The residue was redissolved in 1.5 mL ACN/water and 5 μL of the solution was injected into UPLC-MS/MS for analysis.A UPLC system equipped with a triple quadruple mass spectrometer was used for the determination and identification of OPEs.The separation of analytes was performed on a Hypersil GOLD C18.A binary mobile phase of an aqueous solution of 0.1% formic acid and ACN containing 0.1% formic acid at a flow rate of 0.3 mL min−1 was applied.The gradient was set as follows: 0 min, 0.5 min, 3 min, 4.5 min, 8.5 min, 9 min, 13.8 min, 13.9 min, 17 min.For MS/MS analysis, the electrospray ionization was run in the positive ion mode.The optimal conditions were set as follows: peak width resolution 0.7 m/z, spray voltage 4500 V, sheath gas pressure 35 units, auxiliary gas pressure of 20 units, and capillary temperature 300 °C.Field blanks, procedural blanks, spiked blanks, spiked matrix, and replicate samples were analyzed with extraction to control contamination.In each spiked sample, 50- and 100-ng mixture of OPEs were added.All samples were spiked with TnBP-d27 as surrogate.TCEP was not found in the blank; TnBP, TPhP, TCPP, and TBEP were detected at 2.95, 9.38, 3.90, and 2.00 g L−1 in the blank.The recoveries of standards in spiked samples were within 56–113% at two different spiked concentration levels.The matrix effect was evaluated by addition of standards into the pre-extracted samples were in the range of 83–121% at two different spiked concentration levels.Each batch of ten samples included one procedural blank to check potential contamination.All glassware was solvent rinsed and heated overnight at 400 °C before usage.
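To make the QA/QC arithmetic explicit, the Python sketch below shows the back-calculation implied by the sample-preparation description: an instrument reading on the final 1.5 mL extract is converted to a dry-weight concentration for the 0.5 g sludge subsample, optionally corrected for the TnBP-d27 surrogate recovery, together with a spike-recovery check. The function names and numerical inputs are assumptions for illustration; they are not values from this dataset, and the authors' own data workup may differ, for example in how blank subtraction is handled.

def conc_in_sludge(c_extract_ng_per_ml, final_volume_ml=1.5, sample_mass_g=0.5,
                   surrogate_recovery=1.0):
    """Convert an OPE concentration measured in the final extract (ng mL-1)
    into ng g-1 dry weight for the extracted sludge subsample, optionally
    correcting for the TnBP-d27 surrogate recovery."""
    return c_extract_ng_per_ml * final_volume_ml / (sample_mass_g * surrogate_recovery)

def spike_recovery(measured_spiked_ng, measured_unspiked_ng, spiked_ng):
    """Recovery (%) of a spiked standard: (spiked result - native level) / amount added."""
    return 100 * (measured_spiked_ng - measured_unspiked_ng) / spiked_ng

# Illustrative values only
print(round(conc_in_sludge(25.0, surrogate_recovery=0.85), 1), "ng/g dw")
print(round(spike_recovery(measured_spiked_ng=92.0, measured_unspiked_ng=10.0, spiked_ng=100.0), 1), "%")

The computed spike recovery of 82% in this example would fall within the 56–113% range reported above for the two spiking levels.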
This dataset provides detail information on the analytical methods of organophosphate esters (OPEs) in sludge samples, including the sample preparation, ultra-high performance liquid chromatography-tandem mass spectrometric (UPLC-MS/MS) analysis, quality assurance and quality control (QA/QC). The concentration of target OPE compounds in collected samples of four individual treatment was provided, including aerobic composting combined with pig manure (T1), aerobic composting without pig manure (T2), anaerobic digestion combined with pig manure (T3), and anaerobic digestion without pig manure (T4). To investigate the variation of bacterial community compositions, principal components analysis (PCA) was provided based on the high-throughput sequencing. These data would be useful for clarifying the removal of OPEs under aerobic and anaerobic conditions. Besides, it also provides important information on the potential bacterial strains responsible for the biodegradation of OPEs in each treatment.
461
Half-sandwich rhodium(III) transfer hydrogenation catalysts: Reduction of NAD+ and pyruvate, and antiproliferative activity
Transfer hydrogenation reactions for the reduction of ketones, imines or CC double bonds have been intensively studied in recent years .Precious metal complexes containing Ru, Rh or Ir are often effective as transfer hydrogenation catalysts, and half-sandwich organometallic complexes in particular have achieved high conversions, turnover frequencies and enantio-selectivities .Ruthenium catalysts are often the most efficient, partly due to their slower rate of ligand exchange compared to Rh or Ir, which, in many cases, provides higher chemo- and enantio-selectivity .Although rhodium catalysts can be more active, this can be accompanied by a loss of chemo- and enantio-selectivity.Nevertheless, catalytic activity and enantio-selectivity depend on the choice of appropriate chiral ligands, on the substrate, the hydride donor, and on the reaction conditions .There is current interest in metal-based complexes capable of catalysing reactions in cells .Metallodrugs with catalytic properties can potentially be administered in smaller doses consequently leading to lower toxicity .Transfer hydrogenation reactions have attracted much attention for the reduction of molecules inside cells .Organo-Ru and -Ir complexes can catalyse hydride transfer reactions under biologically-relevant conditions .For example, + or + where CpX is pentamethylcyclopentadienyl, 1-phenyl-2,3,4,5-tetramethylcyclopentadienyl or 1-biphenyl-2,3,4,5-tetramethylcyclopentadienyl can utilise reduced nicotamide adenine dinucleotide to transfer hydride and reduce biomolecules such as pyruvate .The transfer of hydride to nicotinamide adenine dinucleotide can also be achieved using formate as an hydride source, + and + where arene = p-cymene, benzene, hexamethyl benzene or biphenyl .Our recent work suggests that transfer hydrogenation catalysed by organometallic complexes can be achieved inside cells .For example, the Ir complex +, where py is pyridine, can utilise NADH as a biological hydride donor, to generate an iridium-hydride complex.The hydrido complex is able to transfer hydride to molecular oxygen, increasing the levels of hydrogen peroxide and reactive oxygen species in cancer cells .Also, the Ru complex where TsEn is N--4-toluene sulfonamide reduces the levels of NAD+ in A2780 ovarian cancer cells when co-administered with sodium formate, potentiating the anti-cancer activity of the complex .The aim of the present work was to synthesise a series of Rh Cp* catalysts which can carry out transfer hydrogenation reactions in cells.Half-sandwich complexes of the type n +, where N,N′ = ethylenediamine, 2,2′-bipyridine, 2,2′-dimethylbipyridine and 1,10-phenanthroline were synthesised and fully characterised.The activity of these Rh complexes for the reduction of NAD+ in the presence of formate was compared with their Ru analogues and .We also studied the reduction of pyruvate to lactate via hydride transfer from formate, a process carried out naturally in vivo by the enzyme lactate dehydrogenase and the coenzyme NADH.Finally, the antiproliferative activity of the complexes towards cancer cells in the presence of excess formate was also investigated.Rhodium trichloride hydrate was purchased from Precious Metals Online and used as received.Ethylenendiamine was purchased from Sigma-Aldrich and freshly distilled prior to use.The protonated ligands 3-phenyl-1,2,4,5-tetramethyl-1,3-cyclopentadiene and 3-biphenyl-1,2,4,5-tetramethyl-1,3-cyclopentadiene were synthesised following the methods in the literature .The Rh-arene precursor dimers 2 
were prepared following literature methods , as was the ligand N--4-benzenesulfonamide .2,2′-Bipyridine, 4,4′-dimethyl-2,2′-dipyridine, 1,10-phenanthroline, 4-bromo-biphenyl, 4-bromo-biphenyl, 1.6 M n-butyllithium in hexane, phenyllithium in ether, 2,4-pentamethylcyclopentadiene, 2,3,4,5-tetramethyl-2-cyclopentanone were obtained from Sigma-Aldrich.Magnesium sulphate, ammonium hexafluorophosphate, silver nitrate, potassium hydroxide, sodium chloride, hydrochloric acid were obtained from Fisher Scientific.Sodium formate, perchloric acid, β-nicotinamide adenine dinucleotide hydrate, β-nicotinamide adenine dinucleotide reduced disodium salt and sodium pyruvate were purchased from Sigma-Aldrich.DMSO-d6, MeOD-d4, D2O,2CO-d6 and CDCl3 for NMR spectroscopy were purchased from Sigma-Aldrich and Cambridge Isotope Labs Inc.Non-dried solvents used in syntheses were obtained from Fisher Scientific and Prolabo.1H NMR spectra were acquired in 5 mm NMR tubes at 298 K or 310 K on Bruker AV-400 or Bruker AV III 600 spectrometers.Data processing was carried out using XWIN-NMR version 3.6.1H NMR chemical shifts were internally referenced to TMS via 1,4-dioxane in D2O, residual DMSO or CHCl3.1D spectra were recorded using standard pulse sequences.Typically, data were acquired with 16 transients into 32 k data points over a spectral width of 14 ppm and for the kinetic experiments, 32 transient into 32 k data points over a spectral width of 30 ppm using a relaxation delay of 2 s.pH* values were measured at ambient temperature using a minilab IQ125 pH meter equipped with a ISFET silicon chip pH sensor and referenced in KCl gel.The electrode was calibrated with Aldrich buffer solutions of pH 4, 7 and 10.pH* values were adjusted with KOH or HClO4 solutions in D2O.Elemental analyses were performed by Warwick Analytical Service using an Exeter Analytical elemental analyzer.Positive ion electrospray mass spectra were obtained on a Bruker Daltonics Esquire 2000 ion trap mass spectrometer.All samples were prepared in methanol.Data were processed using Data-Analysis version 3.3.Hydrated rhodium trichloride was reacted with 2,4-pentamethylcyclopentadiene dissolved in dry methanol and heated to reflux under a nitrogen atmosphere for 48 h.The dark red precipitate was filtered off and washed with ether to give a dark red powder.The crude product was then recrystallised from methanol.Yield: 410.6 mg.1H NMR: δH 1.627.2 was synthesised following the same procedure described for 2 using hydrated RhCl3 and HCpxPh.The red precipitate was recrystallised from methanol.Yield: 592.7 mg.1H NMR: δH 7.67, 7.43, 1.71, 1.67.2 was synthesised following the procedure described for 2 using hydrated RhCl3 and HCpxPhPh.The red–orange precipitate obtained was recrystallised from methanol.Yield: 613.4 mg.1H NMR: δH 7.75, 7.49, 7.40, 1.73.The Rh dimer was placed in a round-bottom flask to which dry dichloromethane was added.Upon addition of the corresponding ligand,the reaction was stirred overnight at ambient temperature, after which the solvent was removed on a rotary evaporator to afford a crude powder.The crude product was re-dissolved in methanol and filtered.Excess ammonium hexafluorophosphate was then added and the solution stored in the freezer.The resulting product was collected by filtration and recrystallised from acetone or methanol.2, ethylenendiamine.Bright yellow crystals were collected.Yield: 42.3 mg.1H NMR: δH 5.91, 3.05, 2.88, 2.74, 1.90.Anal: Calc for C12H23ClF6N2PRh C: 27.91, H: 4.61, N: 6.21; Found C: 28.11, H: 4.81, N: 
5.95.ESI-MS: Calc for C12H23ClN2O2Rh+ 333.0 m/z found 333.0 m/z.2, ethylenendiamine.Recrystallisation from methanol resulted in bright yellow crystals.Yield: 31.7 mg.1H NMR: δH 7.49, 5.59, 3.11, 3.02, 2.76, 2.05, 1.91.Anal: Calc for C17H25ClF6N2PRh C: 37.76, H: 4.66, N: 5.18; Found C: 37.02, H: 4.67, N: 5.26.ESI-MS: Calc for C17H25ClN2Rh+ 395.0 m/z found 395.0 m/z.2, ethylenendiamine.Recrystallisation from methanol resulted in bright yellow crystals.Yield: 47.7 mg 1H NMR: δH 7.69, 7.60, 5.49, 7.41, 5.59, 3.14, 3.03, 2.79, 2.07, 1.97.Anal: Calc for C23H29ClF6N2PRh C: 44.79, H: 4.74, N: 4.54; Found C: 43.98, H: 4.65, N: 4.54.ESI-MS: Calc for C23H29ClN2Rh+ 471.1 m/z, found 471.1 m/z.2, 2,2′-bipyridine Recrystallization from acetone resulted in bright orange crystals.Yield: 126.6 mg.1H NMR: δH 9.14, 8.67, 8.38,7.95, 1.82.Anal: Calc for C20H23ClF6N2PRh C: 41.8, H: 4.03, N: 4.87; Found C: 41.85, H: 3.97, N: 4.84.ESI-MS: Calc for C20H23ClN2Rh+ 430.1 m/z found 430.0 m/z.2.Recrystallization from acetone resulted in bright orange crystals.Yield: 113.5 mg.1H NMR: δH 8.85, 8.70, 8.37, 7.85, 7.76, 7.60, 1.92, 1.85.Anal: Calc for C25H25ClF6N2PRh + acetone C: 48.40, H: 4.50, N: 4.03; Found C: 49.89, H: 4.22, N: 3.93.ESI-MS: Calc for C25H25ClN2Rh+ 490.1 m/z found 490.0 m/z.2 2,2′-bipyridine.Recrystallization from acetone resulted in bright orange crystals.Yield: 87.2 mg 1H NMR: δH 8.91, 8.89, 8.35, 7.87, 7.79, 7.54, 7.45, 1.95, 1.90.Anal: Calc for C31H29ClF6N2PRh C: 52.23, H: 4.10, N: 3.93; Found C: 52.73, H: 4.31, N: 3.70.ESI-MS: Calc for C31H29ClN2Rh+ 567.1 m/z, found 567.0 m/z.2, 1,10-phenanthroline.Recrystallization from acetone resulted in bright orange crystals.Yield: 138.7 mg.1H NMR: δH 9.43, 8.71, 8.29, 8.1, 1.89.Anal: Calc for C22H23ClF6N2PRh C: 44.13, H: 3.87, N: 4.68; Found C: 44.13, H: 3.79, N: 4.62.ESI-MS: Calc for C22H23ClN2Rh+ 453.1 m/z found 453.0 m/z.2, 1,10-phenanthroline.Recrystallization from acetone resulted in bright red crystals.Yield: 130.3 mg.1H NMR: δH 9.08, 8.69, 8.12, 7.78, 7.61, 2.06, 1.85.Anal: Calc for C27H25ClF6N2PRh + MeOH C: 48.54, H: 4.22, N: 4.04; Found C: 47.68, H: 4.09, N: 4.17.ESI-MS: Calc for C27H25ClN2Rh+ 515.1 m/z found 515.0 m/z.2, 1,10-phenanthroline.Yield: 109.8 mg 1H NMR: δH 9.11, 8.69, 8.13, 8.12, 7.86, 7.82, 7.70, 7.53, 7.45, 2.06, 1.88.Anal: Calc for C23H29ClF6N2PRh + MeOH C: 53.11, H: 4.33, N: 3.64; Found C: 54.37, H: 4.34, N: 3.49.ESI-MS: Calc for C23H29ClN2Rh+ 591.1 m/z, found 591.1 m/z.2 and N--4-benzenesulfonamide were dissolved in dichlormethane and triethylamine was added.The reaction was then stirred under nitrogen atmosphere overnight.The solution was placed in a separating funnel and washed with brine, the organic layer separated and dried over MgSO4 and filtered.The solution was concentrated in vacuo and the product recrystallised from methanol to afford an orange powder.Yield: 40.3 mg.1H NMR: δH 8.00, 7.54, 3.19, 2.60, 1.71.Anal: Calc for C19H25ClF3N2O2RhS C: 42.20, H: 4.66, N: 5.18; Found C: 41.92, H: 4.36, N: 5.01.ESI-MS: Calc.for C19H25ClF3N2O2RhS+ 505.0 m/z, found 505.0 m/z.Solutions of complexes 1–10 were prepared and 1H NMR spectra at 310 K were recorded at time 0 and 24 h.The samples were then incubated at 310 K.Aqueous solutions of the chlorido complexes 1–10 were treated with silver nitrate,and stirred overnight at room temperature.1H NMR spectra were recorded after filtration of the samples through celite to remove the silver chloride formed.Solutions of complexes 1–4, 7 and 10 were prepared and treated with 0.95 mol equiv.of silver 
nitrate.The reaction mixture was then filtered through celite to obtain the corresponding aqua adducts.Changes in the chemical shifts of the methyl protons of the Cpx ligand protons on the aqua adducts with the pH* over a range from 2 to 12 were followed by 1H NMR spectroscopy.Solutions of KOH or HClO4 in D2O were used to adjust the pH*.1H-NMR spectra were recorded at 298 K on a Bruker AV III 600 spectrometer.The data were fitted to the Henderson–Hasselbalch equation using Origin 7.5.Complexes 1–3 and 10 were dissolved in D2O in a glass vial.Complexes 1–9 were prepared in MeOD/D2O in a glass vial and treated with 0.95 mol.equiv.of silver nitrate.Aqueous solutions of sodium formate,and substrate in D2O were also prepared and incubated at 310 K.In a typical experiment, an aliquot of 200 μL of the complex, formate and substrate solutions were added to a 5 mm NMR tube.The pH* of the solution mixture was adjusted to 7.2 ± 0.2 bringing the total volume to 0.635 mL.1H NMR spectra were recorded at 310 K every 162 s until the completion of the reaction.where In is the integral of the signal at n ppm.x = 6.96 or 1.32 ppm.y = 9.33 or 2.36 ppm. 0 is the concentration of NAD+ or pyruvate at the start of the reaction.A set of four experiments was performed to compare the characteristics of the catalytic cycle of Rh complexes 1 and 10 with their Ru analogues.The turnover frequencies for the reduction reaction of NAD+ using complexes 1 or 10 and formate,were determined following the procedure described above.The reaction was studied using different concentrations of NAD+.A second series of experiments using different concentrations of sodium formate and a constant concentration of NAD+,was also performed.The optimum pH range for the catalytic process was studied in a series of experiments.Each experiment was performed at a different pH* over a range from 6 to 10.The pH* of the reaction was adjusted using solutions of KOH or HClO4 in D2O.Transfer hydrogenation reactions using 1,4-NADH as a hydride source in D2O by complexes 1 and 10 were studied by 1H NMR for a period of 10 h.The pH* of the reaction mixture was adjusted to 7.2 ± 0.2, and the experiments performed at 310 K.Complex 1 was dissolved in CD3OD/D2O in a glass vial.Aqueous solutions of sodium formate,NAD+,and pyruvate,in D2O were also prepared and incubated at 310 K.In a typical experiment, an aliquot of 200 μL of complex, 50 μL of formate and 150 μL of pyruvate and NAD+ were added to a 5 mm NMR tube.The pH* of the solution mixture was adjusted to 7.2 ± 0.2 bringing the total volume to 0.635 mL 1H NMR spectra were recorded at 310 K every 162 s until the completion of the reaction.The antiproliferative activities of complexes 1–10 in A2780 ovarian cancer cells were determined.Briefly, 96-well plates were used to seed 5000 cells per well.The plates were pre-incubated in drug-free media at 310 K for 48 h before addition of various concentrations of the compounds.The drug exposure period was 24 h, after which, the supernatant was removed by suction and each well washed with PBS.A further 48 h was allowed for the cells to recover in drug-free medium at 310 K.The sulforhodamine B colorimetric assay was used to determine cell viability .IC50 values, as the concentration which causes 50% growth inhibition, were determined as duplicates of triplicates in two independent sets of experiments and their standard deviations were calculated.The data were analysed using Origin 8.5.IC50 values were obtained from plots of the percentage survival of cells versus the 
logarithm of the concentration expressed in millimolar units and fitted to a sigmoidal curve.IC50 values for cisplatin were determined in each well-plate as a validation.Cell viability modulation assays were carried out in A2780 ovarian cancer cells.These experiments were performed as described above for IC50 determinations with the following experimental modifications.A fixed concentration of complexes was used, 150 μM.Co-administration of the complex with three different concentrations of sodium formate was studied.Both solutions were added to each well independently, but within 5 min of each other.Cell viability percentages were determined as duplicates of triplicates in two independent sets of experiments and their standard deviations were calculated.Stock solutions of the Rh complexes were freshly prepared for every experiment in 5% DMSO and a mixture 0.9% saline : medium.The stock solution was further diluted using RPMI-1640 to achieve working concentrations.The final concentration of DMSO was between 0.5 and 0.1% v/v.Metal concentrations were determined by ICP-MS.Rh complexes 1–9 were synthesised using a similar procedure.Typically, the ligand, was added to a dichloromethane solution of the rhodium dimer, 2 and the reaction mixture stirred at ambient temperature.The details for individual reactions are described in the experimental section.Complex 10 was synthesised by reacting the dimer with the TfEnH ligand in dichloromethane, and in the presence of triethylamine at ambient temperature overnight.The complexes were characterised by elemental analysis, NMR spectroscopy and mass spectrometry.The x-ray crystal structures of complexes 3–6, 8 and 9 were determined and will be reported elsewhere.Aquation of complexes 1–10 at 310 K was followed by 1H NMR over a period of 24 h. 
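The sigmoidal dose-response fitting described above was carried out in Origin 8.5; the snippet below is a minimal Python sketch of the same step, fitting percentage survival against the logarithm of concentration with a four-parameter logistic curve and reading off the IC50. The concentrations, survival values and the logistic form are illustrative assumptions, not the authors' data or script.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(logc, top, bottom, log_ic50, hill):
    """Four-parameter logistic dose-response: % survival vs log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10 ** (hill * (logc - log_ic50)))

# Hypothetical survival data (%) for a dilution series (concentrations in uM).
conc_uM = np.array([1, 3, 10, 30, 100, 300], dtype=float)
survival = np.array([98, 92, 75, 45, 18, 7], dtype=float)

logc = np.log10(conc_uM)
p0 = [100, 0, np.log10(30), 1.0]                 # guesses: top, bottom, logIC50, Hill slope
popt, pcov = curve_fit(logistic4, logc, survival, p0=p0)

ic50 = 10 ** popt[2]
ic50_err = ic50 * np.log(10) * np.sqrt(np.diag(pcov))[2]   # propagate error on logIC50
print(f"IC50 = {ic50:.1f} +/- {ic50_err:.1f} uM (Hill slope {popt[3]:.2f})")
```

In practice the fit would be repeated for each duplicate of triplicates and the resulting IC50 values averaged, as for the values reported in this work.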
For complexes 1–3 and 10, only one set of peaks was observed.For complex 4–9, two sets of peaks were observed and assigned as the chlorido and the aqua species.Peaks for aqua species were assigned by comparison of the 1H spectra of 1–10 in D2O and the products of reactions between 1–10 with silver nitrate.After 24 h incubation, no apparent changes were observed by 1H NMR.Compounds 6 and 9 gave rise to precipitates after 24 h and 1H NMR spectra could not be recorded.The extent of hydrolysis is shown in Table 1.Changes in the 1H NMR chemical shifts of the methyl groups of cyclopentadienyl protons from the aqua adducts of complexes 1–4, 7 and 10 were followed over the pH* range from 2 to 12.The data were fitted to the Henderson–Hasselbalch equation.The pKa* values for complexes 1–4, 7 and 10 are shown in Table 1.Catalytic conversion of NAD+ to NADH using complexes 1–10 and sodium formate was followed by 1H NMR spectroscopy.In a typical experiment, 200 μL of complex, sodium formate and NAD+ were mixed in a 5 mm NMR tube, final ratio 1:25:2–9, complex : sodium formate : NAD+.The pH* of the reaction mixture was adjusted to 7.2 ± 0.2.1H NMR spectra at 310 K were recorded every 162 s until completion of the reaction.The turnover frequencies for the reactions were determined as described in the experimental section.The turnover frequencies for complexes 1–3 in D2O, increased in the order Cp *< CpxPh < CpxPhPh.The most active complex was +, with a TOF of 24.19 h− 1.The reaction was regioselective giving 1,4-NADH exclusively.On changing the ethylendiamine chelating ligand to N--4-benzenesulfonamide, the catalytic activity decreased.Furthermore, a decrease in the regioselectivity of the reaction was also observed with 7.5% of 1,6-NADH being produced.Complex 1 was less active than complex 2 in the reduction of NAD+ in 20% MeOD/80% D2O, following the same trend as observed for the reaction in D2O.However, the catalytic activity of complexes 1 and 2 in 20% MeOD was 2 × times higher than that in D2O.Complexes 4–9 were also studied for the reduction of NAD+ in 20% MeOD/80% D2O.Complexes containing Cp* were more active than those containing CpxPh, which, in turn, were more active than CpxPhPh complexes,.The reactions of complexes 4–9 were largely regioselective, giving only 1 to 10% of 1,6-NADH.Transfer hydrogenation reactions using complex 1, sodium formate and varying concentrations of NAD+ were studied.The turnover frequency remained the same with increasing concentrations of NAD+.For complex 10, the TOF was unaffected by the concentration of NAD+.A second series of experiments on the reduction of NAD+, varying the concentration of sodium formate, was performed.A notable increase in the catalytic activity was observed with increase of the hydride source,.From the plot of TOF vs formate concentration, typical Michaelis–Menten behaviour is observed.From the reciprocal of the TOF vs formate concentration, a maximum turnover frequency of 41.49 h− 1 and a Michaelis constant of 54.16 mM were calculated.The TOFmax and KM for complex were also determined by performing a series of experiments with different concentrations of sodium formate,Fig. 
3.The pH* dependence of the catalytic reaction was also investigated via a series of experiments using complex 1, sodium formate and NAD+ in D2O.A dependence on pH* was observed.The highest TOF was achieved at pH* 7.3 ± 0.1, however the turnover frequencies between pH* 6 and 8 are similar.A decrease in the catalytic activity was observed when the pH* is higher than 9.NADH has previously been shown to be able to act as a hydride donor .As a consequence, experiments in which complexes 1–3 and 10 were reacted with NADH were performed, however, no hydride transfer from NADH was observed over a period of 10 h.Lactate formation via hydride transfer from formate to pyruvate catalysed by complexes 1–9 was followed by 1H NMR.In a typical experiment, 200 μL of complex, formate and pyruvate were mixed in a 5 mm NMR tube, final ratio 1:25:5, complex : formate : pyruvate.1H NMR spectra at 310 K were recorded every 162 s until completion of the reaction.Molar ratios of pyruvate and lactate were determined by integrating the signals of pyruvate and of lactate, Fig. 5.The activity of the complexes towards reduction of pyruvate to lactate is dependent on the nature of both the CpX ring and the N,N′ chelating ligand.Extended CpXPhPh rings result in an increase in the catalytic activity for complexes 1 and 2.However, there was a decrease in the TOF when the N,N′-chelated ligand is 2,2′-bipyridine or phenanthroline.Transfer hydrogenation reactions with complex +, sodium formate, and both NAD+ and pyruvate as hydride acceptors were performed.For these experiments, 4.5 mol equiv.of NAD+ and pyruvate were reduced using 25 mol equiv.of formate.The reduction of both NAD+ and pyruvate by complex 4 was observed, but the rate of reduction was slower for both substrates.In addition, pyruvate was reduced only when the reduction of NAD+ was almost complete.The IC50 values for Rh complexes 1–10 in A2780 human ovarian cancer cells were determined.Complexes 1–3 and 10 were inactive up to the maximum concentration tested.Complexes 4–9 were moderately active with IC50 values between 14 and 65 μM.The percentage cell survival of A2780 cells after incubation with complexes 1–10 and varying concentrations of sodium formate was determined.A significant increase of the antiproliferative activity of the Rh complexes upon addition of formate is evident.The largest increase was observed for complex +, with a decrease in cell survival of up to 50%.Ruthenium, rhodium and Iridium half sandwich complexes have previously been reported to catalyse the reduction of NAD+ via transfer hydrogenation using sodium formate as a hydride source .In the previous reports, Rh complexes generally displayed higher turnover frequencies than Ir, while Ru complexes have the lowest catalytic activity .However, the reported catalytic reactions were usually carried out under non-physiological conditions: high temperatures, very high concentrations of formate, pH ≠ 7.4 or with high concentrations of non-aqueous solvents .The complexes selected for the present work have all already been studied for the reduction of NAD+ under non-physiological conditions: +, developed by Steckhan and Fish , +, published by Süss-Fink , +, rhodium analogue of + studied in our group , and +.Noyori-type catalysts such as are some of the more successful catalysts for transfer hydrogenation reactions .Reduction of NAD+ using Noyori-type catalysts has been recently studied successfully by several groups .In our previous work, we observed that the nature of arene and Cp rings can have 
a significant effect on the catalytic properties of half-sandwich metal complexes.For example, iridium complexes containing Cpx with extended aromatic substituents had improved catalytic activity for the regeneration of NAD+ .With this in mind, complexes containing CpxPh and CpxPhPh rings were studied in the present work.Previous studies with related compounds have shown that the 16e– species or the aqua adducts of the complexes are the active catalysts in transfer hydrogenation reactions .However, in aqueous media the presence of water and pH has been shown to play a critical role .Aquation of the complexes 1–10 in 20% MeOD / 80% D2O was confirmed by comparing the 1H NMR spectra of D2O solutions of the complexes and solutions obtained after removal of the chloride ligand by reaction with AgNO3 to precipitate AgCl.Complexes 1–10 hydrolysed rapidly at 310 K, reaching equilibrium by the time the first 1H NMR spectrum was recorded.Consequently the turnover frequency for transfer hydrogenation is not affected by the use of the chlorido complex instead of the aqua adduct.Complete conversion of the chlorido complexes 1–3 and 10 to their aqua adducts was observed.However, complexes 4–9 reached equilibrium with 30–60% formation of the aqua species.This lower conversion would not be expected to affect the catalytic activity since the hydrolysis is fast, and the formation of hydrido species will shift the equilibria towards aqua adduct formation.pH* titrations to determine the pKa of the aqua adducts of complexes 1–4, 7 and 10 were carried out so that the nature of the complexes at pH 7.2 could be determined.The formation of hydroxido adduct would be expected to hamper the catalytic reaction since hydroxido ligands bind more tightly than the aqua ligands.The pKa* values of the aqua adducts determined by NMR pH* titrations, are all > 8.5.These complexes will therefore exist largely as the aqua complexes at pH values close to 7.4.First, we compared the catalytic activity for the reduction of NAD+ by + and with the arene-Ru analogues we studied previously +, and , .The catalytic activity of the Rh complexes was expected to be higher than that of their Ru analogues based on the literature. 
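Before comparing these turnover frequencies, it may help to recall how they are extracted from the 1H NMR time courses described in the kinetics section: the fraction of substrate converted at each time point is taken from the ratio of the product integral to the sum of product and substrate integrals (e.g. 1,4-NADH at 6.96 ppm versus NAD+ at 9.33 ppm, or lactate at 1.32 ppm versus pyruvate at 2.36 ppm), scaled by the starting substrate concentration, and the TOF is the initial rate of product formation divided by the catalyst concentration. The sketch below illustrates this calculation on invented integrals; it assumes the standard conversion relation and is not the authors' processing routine.

```python
import numpy as np

def conversion(i_product, i_substrate):
    """Fraction converted, from peak integrals of product and residual substrate."""
    return np.asarray(i_product, float) / (np.asarray(i_product, float) + np.asarray(i_substrate, float))

def turnover_frequency(times_h, i_product, i_substrate, c0_substrate_uM, c_catalyst_uM, n_points=4):
    """Estimate TOF (h^-1) from the initial slope of [product] vs time.

    times_h          -- time of each spectrum (h)
    i_product        -- integrals of a product resonance (e.g. 1,4-NADH at 6.96 ppm)
    i_substrate      -- integrals of the matching substrate resonance (e.g. NAD+ at 9.33 ppm)
    c0_substrate_uM  -- starting substrate concentration (uM)
    c_catalyst_uM    -- catalyst concentration (uM)
    n_points         -- number of early points used for the initial-rate fit
    """
    c_product = conversion(i_product, i_substrate) * c0_substrate_uM
    slope, _ = np.polyfit(times_h[:n_points], c_product[:n_points], 1)   # uM per hour
    return slope / c_catalyst_uM                                         # turnovers per hour

# Invented example: spectra every 162 s, 2 mM NAD+, 0.2 mM catalyst.
t = np.arange(8) * 162 / 3600
i_nadh = np.array([0, 4, 8, 12, 15, 18, 21, 23], float)
i_nad = np.array([100, 96, 92, 88, 85, 82, 79, 77], float)
print(f"TOF = {turnover_frequency(t, i_nadh, i_nad, 2000, 200):.1f} h^-1")
```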
,Complex 1 was up to 8.5 × more active than +,.However, complex 10 had TOF values in the same range as those of the Ru analogues .The TOF of complex 10 was 2 × lower than complex 1.This decrease in catalytic activity could be due to steric hindrance generated by the sulfonamide group.Despite the fact that half-sandwich complexes containing N--sulfonamides are known to display very high catalytic activity for hydride transfer reactions , our results are perhaps not surprising, since low catalytic activities have been previously reported with Noyori-type catalysts for the reduction of NAD+ compared with other Rh catalysts .Next a series of ethylenediamine-Rh complexes containing extended Cpx ligands was studied.A trend was observed in which the presence of a more electron-withdrawing Cpx ring gives higher catalytic activity.Accordingly, the complex + shows the highest activity followed by + and, in turn, both are more active than +.With 3, an improvement in the TOF of 24 × was achieved, compared with the ruthenium analogue +.This increase in activity can be attributed to the effect of a less electron-rich Cp ring.The more acidic metal centre may facilitate the coordination of negatively-charged formate, which can then undergo β-elimination to generate Rh-H and CO2.In order to compare the reactivity of Cp-Rh with their arene-Ru analogues, a set of experiments was performed using complexes 1 and 10: dependence of the reaction rate on the NAD+ and formate concentrations, reaction with NADH, and optimum pH* of the reaction.For reactions of + and and varying concentrations of NAD+, no significant alterations in the reaction rate were observed.The unchanged turnover frequency implies that the reaction rate does not depend on the NAD+ concentration.However, when the experiments were performed with increasing concentrations of the hydride source, the reaction rate increased, suggesting that formate is involved in the rate-determining step.This behaviour is similar to that of + previously observed by Fish et al. , with the immobilised tethered Cp*Rh complex-TsDPEN) where TsDPEN is-2-amino-1,2-diphenylethyl]-4-toluene sulfonamide) studied by Hollmann et al. and with + and studied in our laboratory .Plotting the turnover frequency against formate concentration shows a typical Michaelis–Menten behaviour.The maximum turnover frequency calculated from the double reciprocal of TON vs formate concentration for complex 1 is TOFmax = 41.49 h− 1, ca. 28 × higher than that of + .The TOFmax for complex 10 was 37.2 h− 1, only ca. 
5 × higher than that of .At high concentrations of hydride, the complexes containing TfEn ligands are 5 × times more active than their ruthenium analogues.However, complex is still less active than the ethylenediamine complex +.The Michaelis constant for complex 10 indicates a weaker affinity for formate compared to complexes +, +, and .Compared with 2 + , complex 10 shows stronger affinity for formate.This implies that the low catalytic activity when using 25 mol equiv.of formate is due to reduced affinity for formate, but the catalytic activity increases markedly in formate-saturated solutions.Some iridium and ruthenium complexes such asCl]+,Cl] and) have been shown to oxidise NADH through transfer hydrogenation .In such processes, the NADH acts as a natural hydride donor, and the metal catalyses hydride transfer from NADH to other substrates such as quinones, pyruvate or oxygen .In this study, we investigated the possibility of oxidising NADH using the rhodium complexes 1 and 10.No oxidation of NADH occurred after 12 h at 310 K and pH 7.2 ± 0.2.In contrast to the above-mentioned compounds, the lack of reactivity with NADH is similar to the ruthenium complex .These experiments emphasise the critical influence of the ligands in half-sandwich complexes on the reactivity of the complex.The last experiment performed was to determine the optimum pH* for the reduction of NAD+.In line with our aim of applying the catalytic reduction of NAD+ in biological systems, we worked at pH 7.2.However, it is interesting to note the effect of pH on the reaction rate.Surprisingly, the maximum activity for complex 1 was observed at pH* 7.3 which is close to physiological pH. Furthermore, slight variations were observed over the pH range 5 to 9.At pH* > 9, the concentration of OH− inhibits the reaction due to the formation of the more inert Rh hydroxido adduct.This effect of pH was also observed for the ruthenium systems .In order to select the best catalyst candidate to work with, 3 well-known half-sandwich Rh compounds were studied for their catalytic activity towards the reduction of NAD+.We also studied the effect of extended phenylation of CpX rings on their TOF.These reactions were performed in 20% MeOD and 80% D2O to aid solubility.As we observed previously, there is an increase on the TOF for ethylendiamine-containing-complexes when using extended rings.Interestingly, there is a 2-fold increase in the TOF due to the effect of 20% MeOD.This effect was observed previously for the ruthenium complexes ] .The TOF of complexes 4–9 for the reduction of NAD+ decreases when using CpxPh or CpxPhPh as ligands.Those results are surprising, since the trend is opposite to that for complexes 1–3.These results may be attributable to steric factors due to the extended Cp ring and the size of the chelating ligand.The catalytic activity for the reduction of NAD+ is also dependent on the N,N′ chelating ligand.For complexes containing Cp*, the turnover frequencies increase in the order en < phen < bpy, whereas for complexes containing CpXPh or CpXPhPh, the TOFs increase in the order phen < bpy < en.Higher activity was obtained with +.However, the catalytic activity for the reduction of NAD+ ethylendiamine complex + is in the same range as that obtained with complex +.We initially studied the reduction of NAD+, but it is well known that this type of catalyst can also be used for the reduction of other molecules.Previously, the possibility of reducing biomolecules such as pyruvate was reported, using + as a catalyst 
. Therefore, we studied the catalytic reduction of pyruvate using complexes 1–9. Complexes 1 and 2 reduce pyruvate in the presence of sodium formate, and complete conversion to lactate was achieved. The complex containing the Cp* ring was less active than that containing the extended cyclopentadienyl ring. Similar to the reduction of NAD+, when using bipyridine and phenanthroline as chelating ligands, the catalytic activity decreases with extended Cp rings. Interestingly, the catalytic activity of compounds 6 and 9 was extremely low, and after 8 h less than 30% conversion was achieved. The complexes containing CpXPhPh were not able to reduce pyruvate effectively before decomposition, and the reactions cannot be considered catalytic. The low catalytic activity of complexes 1–9 for the reduction of pyruvate was expected, since both pyruvate and formate contain negatively-charged carboxylate groups which can compete with hydride for binding to Rh. Competition experiments were performed using complex 4. For these experiments, 4.5 mol equiv. of NAD+ and pyruvate were reduced using 25 mol equiv. of formate. The experiment showed a clear preference of complex 4 for the reduction of NAD+. NAD+ was reduced completely, but at a slightly lower rate than when the reaction was performed without pyruvate, attributable to competitive binding of formate and pyruvate. Pyruvate was reduced only after the levels of NAD+ became very low. The reduction of pyruvate in the presence of NAD+ was extremely slow compared with the reduction of pyruvate on its own, with 50% reduction of pyruvate to lactate after 2 h 15 min. The antiproliferative activity of complexes 1–10 in A2780 human ovarian cancer cells was studied. IC50 values for complexes 1–3 and 10 were higher than 100 μM, while complexes 4–9 showed moderate activity, with IC50 values between 14 and 65 μM. The highest activities were obtained with the phenanthroline series (IC50 = 17.8 ± 0.6 μM and 14.68 ± 0.08 μM). Interestingly, in all cases lower IC50 values were obtained when using the CpxPhPh capping ligand, perhaps due to an increase in hydrophobicity and increased accumulation of the metal complex in cells, or due to intercalation of the extended aromatic unit between nucleobases of DNA. Cell survival with complexes 1–10 upon addition of formate at concentrations of 0.5, 1 and 2 mM was also determined. The complexes show enhanced activity in combination with formate. Formate alone had no effect on cell viability. It seems reasonable to propose that co-administration of the complexes with formate gives rise to catalytic transfer hydrogenation reactions in the A2780 human ovarian cancer cells, although perhaps with few turnovers in view of the variety of nucleophiles in the cell which might terminate the reactions. In our previous work we have shown that transfer hydrogenation catalysts such as the Ir and Ru half-sandwich complexes described in the Introduction, together with formate, can reduce the levels of nicotinamide adenine dinucleotide (NAD+) in cells. NAD+ is an important co-enzyme involved in maintaining the redox balance, the Krebs cycle and other metabolic pathways such as the synthesis of ADP-ribose, ADP-ribose polymers and cyclic ADP-ribose, which are crucial for genome stability, DNA repair, and maintenance of calcium homeostasis. Changes in the cellular redox status play an important role in cell death. In particular, cancer cells might be more sensitive to redox variations, since they are under constant oxidative stress due to high production of reactive oxygen species. Despite the higher catalytic activity towards the reduction of NAD+ of
the Rh complexes 1–10 compared with the Ru complexes studied previously, the enhancement in antiproliferative activity of the Rh complexes is not as high as for their Ru analogues. The increase in catalytic activity for the reduction of NAD+ does not correlate with an increase in antiproliferative activity when the drugs are co-administered with sodium formate. The inconsistency between the catalytic activity and the effects on cells might be due to various factors, for example poisoning of the catalyst. Ward et al. have studied the possibility of synthesising Ir Noyori-type catalysts capable of reducing NAD+, quinones and ketones. In order to reduce poisoning by sulphur-containing molecules, such as glutathione, the compounds were conjugated to biotin–streptavidin. The resulting artificial metalloenzyme was able to carry out transfer hydrogenation reactions and to reduce, to a certain extent, the effect of poisoning by glutathione. The anticancer activity of the compounds may be due in part not only to the reduction of NAD+ but also to the reduction of other biomolecules. For example, in this work we have demonstrated that complexes 1–5 and 7–8 can reduce pyruvate, while compounds 6 and 9 cannot. Pyruvate is an essential component of cellular energy pathways and is of special importance in cancer cells. It has been previously observed that cancer cells consume high levels of glucose and release lactate and carbon dioxide. This behaviour is linked to the malfunctioning of mitochondria. Cancer cells have a high rate of consumption of glucose due to the need to generate nutrients such as nucleotides and amino acids. However, a high rate of glycolysis requires high levels of NAD+. Cancer cells regenerate NAD+ at a very fast rate by reducing pyruvate to lactate. In the presence of the Rh catalysts 1–5 and 7–8, the pool of pyruvate in cells may be reduced due to conversion to lactate. As a consequence, the regeneration of NAD+ and the anaerobic glycolysis in cancer cells will be disrupted. The reduction of pyruvate, in combination with the reduction of NAD+, will affect not only the redox regulatory system of the cell, but also the generation of ATP and the formation of various nutrients necessary for cell growth and reproduction. During the last decade, there has been increasing interest in developing metal-based catalysts which are effective in cells. Such technology might be useful in areas such as protein labelling, imaging, and the treatment of diseases. For example, Chen and Meggers have shown that metal compounds can be used to remove protecting groups from N-protected anticancer drugs. Cowan et al.
have shown that Cu/Ni-ATCUN compounds can decompose RNA in hepatitis or HIV .Also organo-ruthenium compounds can oxidise glutathione catalytically .In the current work, we have studied the possibility that the Rh complexes may be more active catalysts than their Ru arene analogues.Four series of compounds which can reduce NAD+ to NADH using formate as a hydride source, have been investigated under biologically-relevant conditions.The catalytic activity decreased in the order of N,N-chelated ligand bpy > phen > en with Cp* as the η5-donor.However, while the ethylenediamine-containing compounds became more active with extension to the CpX ring, we observed a decrease in catalytic activity for N,N-chelated phenanthroline- and bipyridine– compounds, perhaps due to an increase in steric hindrance.The complex + showed the highest catalytic activity towards the reduction of NAD+, with a TOF of 37.4 ± 2 h− 1.The catalytic activity of complexes + is up to two orders of magnitude higher than for their Ru analoguesRuCl] + = 0.85 h− 1).Interestingly, Noyori type Rh compounds such as 10 with a turnover frequency of 4.12 h− 1, display no marked improvement compared to the Ru arene analoguesRuCl]).Mechanistic studies on these Rh complexes were carried out in order to investigate the catalytic cycle.Fast hydrolysis of the chlorido complexes 1–10 was observed by 1H NMR.Compounds 1–3 and 10 gave quantitative conversion to aqua adducts, while complexes 4–9 reached equilibrium with a 30–60% of the aqua species present.The pKa* values determined for the aqua adducts of ca. 8–10 indicate that the Rh complexes are likely to be present in their aqua forms around pH 7 rather than forming less reactive hydroxido species.Complexes 1 and 10 showed a dependence of the reaction rate on formate concentration, but not on NAD+ concentration.No reaction between complexes 1–4 and 1,4-NADH was detected after 10 h. Optimum pH* for the reaction with complex 1 was 7.3 ± 0.1, close to physiological pH. 
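The pKa* values quoted above come from following the CpX methyl chemical shift of the aqua adducts as a function of pH* and fitting the titration curve to the Henderson–Hasselbalch equation (performed in Origin 7.5 in this work). A minimal Python sketch of that fitting step is shown below; the chemical shifts, pH* values and the fitted pKa* are invented for illustration and do not correspond to any specific complex.

```python
import numpy as np
from scipy.optimize import curve_fit

def henderson_hasselbalch(pH, delta_acid, delta_base, pKa):
    """Observed shift for fast exchange between the aqua (acid) and hydroxido (base) forms."""
    frac_base = 1.0 / (1.0 + 10 ** (pKa - pH))
    return delta_acid + (delta_base - delta_acid) * frac_base

# Hypothetical titration of a Cp* methyl resonance (ppm) over pH* 2-12.
pH_star = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], float)
shift = np.array([1.660, 1.660, 1.659, 1.659, 1.658, 1.656,
                  1.648, 1.628, 1.612, 1.606, 1.605])

popt, pcov = curve_fit(henderson_hasselbalch, pH_star, shift, p0=[1.66, 1.60, 9.0])
pKa_star, pKa_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"pKa* = {pKa_star:.2f} +/- {pKa_err:.2f}")
```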
These results indicate that these Rh Cpx compounds exhibit similar behaviour to their Ru arene analogues.The maximum turnover frequencies of Rh complexes 1 and 10 of 41.5 and 37.2 h− 1, respectively, at high concentrations of formate are significant improvements in catalytic activity compared to the Ru analogues, even for complex .This improvement in the maximum turnover frequency might be expected to translate in significant enhancements in antiproliferative activity towards cancer cells when the complexes are used in combination with high concentrations of formate.We also demonstrated that some Rh complexes, notably 1–5, 7 and 8, can catalyse the reduction of pyruvate to lactate using formate as the hydride donor.Such reactions might occur in cells.However complexes 6 and 9 with very low TOFs are not effective catalysts.The transfer hydrogenation reactions were shown to be greatly affected by the chelating ligand and the capping Cpx ring.The catalytic activity of ethylenediamine compounds increases for extended Cpx rings, while the bipyridine and phenanthroline compounds show the opposite trend.Studies of competition reactions between NAD+ and pyruvate for reduction by formate catalysed by complex 4 suggested a clear preference for the reduction of NAD+, although, some lactate was still formed.The antiproliferative activity of the Rh complexes towards A2780 human ovarian cancer cells increased by up to 50% when administered in combination with formate.However, the improvement in the activity of these Rh complexes induced by formate is much lower than in the case of Ru complexes.It is possible that Rh centres, being more reactive than Ru, are more easily poisoned in the complicated mixture of biomolecules present in culture media and in cells.
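The TOFmax and KM values referred to in these conclusions were obtained from the dependence of the turnover frequency on formate concentration via the double-reciprocal (Lineweaver–Burk) plot. The sketch below reproduces that analysis on invented data and also shows a direct non-linear Michaelis–Menten fit for comparison; the formate concentrations and TOF values are illustrative assumptions, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(formate_mM, tof_max, km_mM):
    """TOF as a function of the hydride-donor (formate) concentration."""
    return tof_max * formate_mM / (km_mM + formate_mM)

# Hypothetical TOF (h^-1) measured at increasing formate concentrations (mM).
formate = np.array([5, 10, 25, 50, 100, 200, 400], float)
tof = np.array([3.6, 6.7, 13.0, 19.8, 26.5, 32.0, 36.0])

# Direct non-linear fit.
popt, _ = curve_fit(michaelis_menten, formate, tof, p0=[40, 50])
print(f"Non-linear fit:  TOFmax = {popt[0]:.1f} h^-1, KM = {popt[1]:.1f} mM")

# Double-reciprocal linearisation: 1/TOF = (KM/TOFmax)*(1/[formate]) + 1/TOFmax
slope, intercept = np.polyfit(1 / formate, 1 / tof, 1)
tof_max_lb = 1 / intercept
km_lb = slope * tof_max_lb
print(f"Lineweaver-Burk: TOFmax = {tof_max_lb:.1f} h^-1, KM = {km_lb:.1f} mM")
```

A direct non-linear fit is generally preferable because the double-reciprocal transform weights the low-formate points heavily, but for well-behaved data the two approaches give essentially the same TOFmax and KM.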
Organometallic complexes have the potential to behave as catalytic drugs. We investigate here Rh(III) complexes of general formula [(Cpx)Rh(N,N′)(Cl)], where N,N′ is ethylenediamine (en), 2,2′-bipyridine (bpy), 1,10-phenanthroline (phen) or N-(2-aminoethyl)-4-(trifluoromethyl)benzenesulfonamide (TfEn), and Cpx is pentamethylcyclopentadienyl (Cp∗), 1-phenyl-2,3,4,5-tetramethylcyclopentadienyl (CpxPh) or 1-biphenyl-2,3,4,5-tetramethylcyclopentadienyl (CpxPhPh). These complexes can reduce NAD+ to NADH using formate as a hydride source under biologically-relevant conditions. The catalytic activity decreased in the order of N,N-chelated ligand bpy > phen > en with Cp∗ as the η5-donor. The en complexes (1-3) became more active with extension to the CpX ring, whereas the activity of the phen (7-9) and bpy (4-6) compounds decreased. [Cp∗Rh(bpy)Cl]+ (4) showed the highest catalytic activity, with a TOF of 37.4 ± 2 h−1. Fast hydrolysis of the chlorido complexes 1-10 was observed by 1H NMR (< 10 min at 310 K). The pKa∗ values for the aqua adducts were determined to be ca. 8-10. Complexes 1-9 also catalysed the reduction of pyruvate to lactate using formate as the hydride donor. The efficiency of the transfer hydrogenation reactions was highly dependent on the nature of the chelating ligand and the Cpx ring. Competition reactions between NAD+ and pyruvate for reduction by formate catalysed by 4 showed a preference for reduction of NAD+. The antiproliferative activity of complex 3 towards A2780 human ovarian cancer cells increased by up to 50% when administered in combination with non-toxic doses of formate, suggesting that transfer hydrogenation can induce reductive stress in cancer cells.
462
Competition between copper and iron for humic ligands in estuarine waters
Iron and copper occur complexed with organic matter in waters from estuarine and oceanic origin in spite of competition by the major cations that occur at concentrations typically 106 times greater.Metal complexation is important because it affects the metal geochemistry and bioavailability.The main removal pathway of freshly added metals in estuarine and coastal waters is by scavenging with suspended particulate matter, in addition to biological uptake in ocean waters.Dissolved complexation reactions are in competition with the removal processes.Complexation also affects availability to microorganisms leading to feedback reactions, as exemplified by releases of exopolysaccharides from marine bacteria, and other ligands.Importantly, the solubility of Fe is enhanced by organic complexation causing the element to remain in solution allowing more time for uptake by microorganisms.Copper speciation in seawater is dominated by organic complexation.Thiols have been identified as one type of Cu-binding ligands and humic substances are likely another ligand.Cu-binding ligands and thiols have been shown to emanate from pore-waters into shallow surface waters.Cu-HS species have recently been shown to occur in estuarine waters.The complex stability is a function of the stability constant, which is conditional upon side-reactions of the ligand with competing cations, and the ligand concentration.Natural waters containing ligands of various sources, contain a mixture of ligands that form strong as well as weak complexes, that can be crudely subdivided into ligand classes.Log K′CuL values vary between ligand classes, typically ranging from log K′Cu′L = 8–10 for weak ligands to as high as log K′Cu′L = 15 for strong ligands).Suwannee River humic acid gives a log K′Cu′SRHA = 10.7 in seawater.Estuarine waters have been reported to have ligands for Cu with a complex stability of log K′Cu′L ranging from 11–16 encompassing that of HS but also several thiol compounds.The speciation of iron is, like Cu, dominated by organic complexation and it tends to occur > 99% complexed with organic matter in sea and estuarine waters.Siderophores-binding ligands secreted by bacteria) and humic substances have been reported as Fe-binding ligands.Certain natural ligands, such as domoic acid, are known to be complexed with both copper and iron in seawater suggesting that metal competition could play a role.It has been suggested that Fe uptake by Pseudo-nitzschia is regulated by both domoic acid and copper, which could possibly be explained by competition reactions.Speciation measurement is generally by cathodic stripping voltammetry making use of competition between an added ligand, which forms an electroactive complex, and natural complexing matter.Using this technique the concentration and complex stability of natural complexing ligands of Cu and Fe have been determined in estuarine, coastal and ocean waters.Competition between the metals for ligands occurring in natural waters has not been demonstrated though competition has been shown between Fe, Cu, Al and Co for complexation with Suwannee River humic substances added to seawater.It is possible to determine specific ligands in seawater on the basis of the specific CSV response of their metal species.This has been used to identify Cu binding thiols, Cu and Fe binding humic substances, and various sulphur species.Using the ligand competition method it has not been possible to investigate competition between metals for natural complexing ligands because the added competing 
ligand affects the speciation of all metals.Here we make use of the signal for specific ligands to investigate competition between Cu and Fe for these ligands in estuarine and coastal waters.Concentrations of copper and iron binding ligands were determined separately using ligand competition techniques and from the signal for Cu-HS and Fe-HS.Voltammetric apparatus was a μAutolab-III potentiostat connected to a hanging mercury drop electrode.The reference electrode was Ag/AgCl with a 3 M KCl salt bridge, and the counter electrode was a glassy carbon rod.The stirrer was a rotating PTFE rod.GPES software was used to control the instrument.Apparatus used for Cu detection used nitrogen for oxygen removal, whereas apparatus used for Fe and Fe-speciation was pressurised using air in order to ensure that the concentration of dissolved oxygen was constant during the measurements.The software was changed to discard 2 mercury drops between scans.Water used for rinsing and dilution of reagents was purified by reverse osmosis and deionisation.Glass and PTFE voltammetric cells used for total metal determination were cleaned using 0.1 M HCl and rinsed with deionised water followed by UV-digested sample before measurements.Vessels used for titrations were MQ-rinsed about once a week but were not normally rinsed between titrations to minimise de-conditioning.pH measurements were calibrated against pH 7 and pH 4 standards on the NBS pH scale.The reference section of the combined pH electrode was filled with 3 M KCL.Total dissolved metal concentrations were determined by CSV after 1 h UV-digestion of acidified samples− 1) either in 30-mL PTFE-capped quartz sample tubes using a 125-W UV system or in the voltammetric cell with a horizontal UV lamp.pH neutralisation was by addition of ammonia and borate pH buffer.UV absorbance of dissolved humic matter was measured on a Jenway 7315 spectrophotometer set to 355 nm in polystyrene cells of 1 cm path length.Background correction was against UV seawater.The absorbance of each station was compared to a calibration curve of HA standards to quantify the HS in samples, similar to that used for chromophoric dissolved organic matter.Cu and Fe standard solutions were atomic absorption spectrometry standard solutions diluted with MQ water; HCl was added to a pH of 2.Typically 20 mL was prepared of these solutions, which were stable and were replaced only when the level ran low.An aqueous stock solution containing 0.1 M salicylaldoxime was prepared in 0.1 M HCl.Reference humic and fulvic acid used for calibrations were Suwannee River HA Standard II 2S101H) and FA, which were dissolved in MQ water to a concentration of 0.1 g L− 1 and stored in the dark at 4 °C when not in use.1 M sodium bicarbonate in MQ water was diluted to 2 mM and used to dilute seawater to lower salinity.A pH buffer containing 1 M boric acid and 0.35 M ammonia was UV-irradiated for 45 min to remove organic contaminants.A bromate stock solution containing 0.4 M potassium bromate was used for the determination of Fe-binding HS.Contaminating metals were removed from the buffer and bromate solutions by overnight equilibration with 100 μM MnO2 and then filtered; 100 μL of the buffer in 10 mL seawater gave a pHNBS of 8.18.All sample containers were cleaned in 3 steps: first by soaking for 1 week in 1% detergent in warm MQ-water, followed by soaking for 1 week in 1 M HCl, and finally by soaking at least 1 week in 0.1 M HCl.Containers were then rinsed in MQ-water and stored partially filled with 0.01 M 
HCl.Samples from the Mersey Estuary and Liverpool Bay were collected using a peristaltic pump.The water inlet tubing was held away from the vessel, the RV Marisa, and the water was used to first rinse and then fill a 5-L high-density polyethylene container at each station.The suspended matter was allowed to settle overnight in the laboratory and the supernatant water was filtered through an in-line 0.2 μM filtration cartridge using a peristaltic pump, and stored in the dark at 4 °C in 0.5 L HDPE bottles.Acidified sample aliquots were UV-digested in a PTFE voltammetric cell with the lamp placed immediately above the sample in a PTFE housing separated by a sheet of quartz from the sample, prior to pH neutralisation with ammonia and pH buffering using the borate/ammonia buffer.Fe and Cu were determined by CSV as optimised previously.CSV conditions for Cu were a deposition potential of − 0.15 V, using a 1-s potential-jump to − 1.3 V to desorb any residual organic matter and Fe species, followed by 9 s equilibration at − 0.15 V prior to the voltammetric scan to − 0.7 V mode, modulation time 40 ms, modulation amplitude 50 mV, step potential 5 mV, interval time 0.1 s) in the presence of 20 μM SA.CSV of Fe used the DP mode in the presence of 5 μM SA, with an adsorption potential of 0 V followed by the scan to − 1 V.The concentration of Cu-HS was determined after addition of sufficient copper to saturate the HS followed by detection of the Cu-HS by CSV.Calibration was by internal standard additions of SRHA.Low salinity samples with very high HS concentrations were diluted to 10 or 50% in UV-SW to increase the linear range.The deposition potential was + 0.05 V, with a deposition time between 10 and 60 s depending on the concentration of HS.The quiescence time was 9 s and scans were initiated from 0 V and terminated at − 0.75 V.The scanning parameters were differential-pulse mode, modulation time 40 ms, modulation amplitude 50 mV, step potential 5 mV and interval time 0.1 s.A background scan using a one-second deposition time was subtracted from the analytical scan to eliminate the peak for inorganic Cu adjacent to the Cu-HS peak.The concentration of Fe-HS was determined by CSV in the presence of bromate.The voltammetric apparatus was air-pressurised as for the Fe determination, as the response for Fe-HS was not affected by the DO.Bromate was added to improve the sensitivity for the Fe-HS.The concentration of Fe-HS was determined after addition of sufficient Fe to saturate the HS followed by detection of the Fe-HS complexes by CSV.Calibration was by standard additions of SRHA.Samples with high Fe and humic concentrations were diluted with UV-SW 10 or 50% to get the concentrations in the linear range.The deposition potential was 0 V, and the deposition time was between 20 and 120 s, depending on ambient dissolved Fe and Fe-binding humic concentrations.The concentration of Cu and Fe complexing ligands in the samples was determined in separate titrations with ligand competition against SA.Approximately 150 mL sample was transferred to a 250-mL low-density polyethylene bottle, 0.01 M borate buffer and 1, 2 or 10 μM SA was added for Cu.10 mL aliquots of the solution were pipetted into 14 25-mL PFA vials with lid.Cu was added to each vial in steps of progressively increasing concentration from 0 nM to 150 nM.These were then left to equilibrate overnight.The labile Cu concentration in each cell was then determined by CSV using 30 s adsorption.The deposition potential was − 0.15 V, followed by a 9-s 
quiescence period and the scan initiated at the same potential.The scan was in DP-mode as for total Cu.The concentration of Fe complexing ligands was determined by titration with Fe in the presence of 5 μM SA and borate pH buffer using a method modified from that before.The 10-mL sample aliquots were equilibrated in polyethylene tubes, containing Fe additions to give a range 0 to 70 nM Fe.Two of the tubes were 0 added Fe.The iron and natural complexing ligands were allowed to equilibrate 10 min at room temperature.5 μM SA was then added to the aliquots, which were left to equilibrate overnight prior to the determination of labile Fe by CSV in a PTFE voltammetric cell.Vials used for the titrations were conditioned typically 3 times prior to a titration by setting up and discarding the titration about 3 h later or after overnight equilibration.The third one was measured and was repeated to check for further improvement.The voltammetric cell was conditioned 3 times with seawater with SA without added Cu or Fe prior to the start of a titration.Data interpretation was using the van den Berg/Ruzic linearization procedure.A first estimate for S from the last three data points was improved using the modelled linearisation, with correction of the sensitivity for under-saturation of L.Comparative calculations were carried out using MCC software which fits the data simultaneously to several fitting methods, linear and non-linear and also corrects for under-saturation of L.The complex stability of Fe-SA was calibrated at several salinities by monitoring by CSV the concentration of Fe-SA as a function of the concentration of SA.UV-SW was diluted with MQ, containing 2 mM HCO3−, to achieve salinities of 4, 11, 20, 26 and 35.10 mL of the water was pipetted into the voltammetric cell with 10 mM borate buffer, 5 μM SA and 5 nM Fe.The water was air-purged and SA was added in steps from 1 to 100 μM SA.A 5-min reaction time was allowed after each addition and 3 scans were made using an adsorption time of 120 s.The CSV signal for Fe-HS was obtained as before: 10 mL seawater was put in the voltammetric cell and 100 μL borate buffer, 100 nM Fe and 1 mL bromate were added.Cu additions were then made whilst monitoring the response for the Fe-HS species using an adsorption time of 30–60 s. 
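The van den Berg/Ruzic interpretation mentioned above linearises the titration data: for a single ligand class, a plot of labile metal divided by ligand-bound metal against labile metal has a slope of 1/[L] and an intercept of 1/(K′[L]), from which the ligand concentration and conditional stability constant follow. The sketch below shows only this core linearisation on simulated titration data; it omits the iterative sensitivity correction for under-saturation of L and the conversion of K′ to the M′ scale via the α-coefficient of the added SA, and it is not the MCC software used in this study. The titration values, sensitivity and constants are invented for illustration.

```python
import numpy as np

def ruzic_van_den_berg(m_total_nM, ip, sensitivity):
    """Single-ligand linearisation of a CSV complexing-capacity titration.

    m_total_nM  -- total metal in each titration vial (added + ambient), nM
    ip          -- peak current for the labile (inorganic + SA-bound) metal
    sensitivity -- peak current per nM of labile metal
    Returns (total ligand concentration in nM, K' conditional on the labile metal).
    """
    labile = np.asarray(ip, float) / sensitivity          # nM labile metal
    bound = np.asarray(m_total_nM, float) - labile        # nM metal bound by natural ligands
    slope, intercept = np.polyfit(labile, labile / bound, 1)
    L_total = 1.0 / slope
    K_cond = slope / intercept                            # = 1 / (intercept * L_total), nM^-1
    return L_total, K_cond

# Simulated titration: 40 nM ligand with K' = 1e3 nM^-1 (i.e. 1e12 M^-1) vs labile metal.
m_tot = np.array([5, 10, 20, 30, 50, 75, 100, 150], float)
K, L = 1e3, 40.0
labile = [(-(K*L - K*m + 1) + np.sqrt((K*L - K*m + 1)**2 + 4*K*m)) / (2*K) for m in m_tot]
ip = 0.8 * np.array(labile)                               # pretend sensitivity of 0.8 nA/nM

L_est, K_est = ruzic_van_den_berg(m_tot, ip, 0.8)
print(f"[L] = {L_est:.1f} nM, log K' (labile scale) = {np.log10(K_est * 1e9):.2f}")
```

For real titrations the first estimate of the sensitivity from the final titration points is refined iteratively and corrected for under-saturation of the ligand, as described above, and curvature in the plot is taken as evidence for more than one ligand class.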
Five minutes equilibration time was allowed after each copper addition.Repeated measurements with zero-added copper were used to obtain a value for the initial peak height of the experiment.The concentration and complex stability of Cu and Fe complexes with ligands in seawater is determined by ligand competition against SA.The complex stability for Cu with SA has been calibrated over a salinity range and was used in this work: log K′CuSA = 10.12–0.37 log Sal, and log B′CuSA2 = 15.78–0.53 log Sal.Similar to Cu, two SA-species are also known to exist for Fe with SA, but in the case of Fe only one of these, FeSA, is electroactive.The complex stability of FeSA and FeSA2 has been calibrated for seawater of salinity 35 but not yet for estuarine waters at lower salinity.For this reason values for of K′FeSA and B′FeSA2 were calibrated here at salinities between 4 and 35.Values for K′FeSA and B′FeSA2 were fitted to the response for 5 nM Fe in UV-digested seawater diluted to several salinities and in the presence of SA at concentrations between 1 and 100 μM.The data was expressed as a ratio, X, of the actual response over the maximum response: X = ip/ipmax.The constants were used to calculate the overall α-coefficient for Fe with SA including and used to obtain complex stability for FeL in this study.Values for these α-coefficients are compared to values obtained using the original constants at two concentrations of SA and four salinities in Table 2.Cu and Fe complexing ligands were determined by CSV with ligand competition against SA.The ligand concentrations are on basis of Cu and Fe-equivalents.Concentrations of Cu-HS and Fe-HS were determined by CSV and calibrated using SRHA on the mg/L scale.HS concentrations were found to be stable for several months when stored in the dark at 4 °C.HS in STN 6 was measured in May 2013, Sept 2013 and Feb 2014 using voltammetry and found to be 0.90, 0.90 and 0.89 mg L− 1 respectively.STN 6 was also measured using UV spectrophotometry in May 2014 giving 0.91 mg/L.All samples measured using UV spectrophotometry in May 2014 were within 11% of the values measured by voltammetry a year earlier.Some Cu-complexing capacity titrations showed the possibility of the presence of two individual ligand classes but due to the high organic matter content obscuring the CuSA peak at low total Cu, the initial titration points were often difficult to measure and were unreliable.For this reason, it was only possible to accurately measure total ligand rather than separate ligand classes.The Cu-complexing ligands, Cu-HS and dissolved Cu follow the same pattern: decreasing with increasing salinity towards much lower concentrations in the seawater endmember.The seawater end-member sample was taken in Liverpool Bay about 15 mile from the mouth of the estuary, where the water was clear and contained much less suspended matter than in the estuary, and had a dissolved Cu concentration of 11.6 nM.The concentrations of Cu-HS had previously been determined in the same samples and are compared here to Fe-HS and the complexing ligands of Cu and Fe.The concentration of LCu was greater than the Cu concentration in all samples, and therefore likely in control of the geochemistry of Cu.The pattern for Fe-HS was similar to the Cu species in that , LFe and co-vary, and with a ligand concentration greater than the dissolved .However, there are differences in the specific distributions.The concentrations of HS found after complexation with Cu were the same as those of Fe-HS as evidenced from a plot of 
versus which has a slope of 1.04 ± 0.1.The good agreement between these two independent measurements suggests that the same HS is detected, and the similarity of the UV-HS values suggests that approximately the total concentration of HS is detected in spite of possible competition between metals of the UV method at salinity 32).The ligand concentrations for Cu and Fe were similar but not identical, the concentration of LCu generally being smaller than that of LFe.A plot of one against the other has a significant intercept on the axis for LCu, at zero LFe.This finding suggests that a second ligand may be present for Cu, amounting to ~ 15 nM at the high salinity end, and which is in addition to the Cu-binding HS.This second ligand was not apparent in the titrations, as the CuSA peak at low was difficult to measure.We have not been able to identify the second ligand experimentally, but thiols are known to occur in seawater and act as ligand for CuI.The data point for LCu at the highest concentration of LFe deviates from the linear relationship: a possibility is that competition has played a role here as the concentration of LCu may have been underestimated at the high concentration of Fe in this sample.The concentrations of Cu-HS and Fe-HS, which had been calibrated by additions of SRHA, were converted to the nanomolar scale by multiplication with their metal binding capacity for the SRHA reference material.The Cu-binding capacity of this particular batch of SRHA is 18.0 ± 0.4 nmol/mg SRHA, whereas the Fe-binding capacity is 30.6 ± 0.6 nmol/mg SRHA nearly the same as that found previously for a different batch of SRHA.Because the ligand concentrations are greater than the metal concentrations, it is likely that the metals and ligands co-vary, potentially the ligands controlling the estuarine geochemistry of the metals.This was tested by plotting the metal concentration as a function of the concentration of the ligands and the HS.The data shows that the Cu-binding ligand concentration is greater than dissolved Cu in all stations and the dissolved Cu varies linearly as a function of Cu-HS and LCu.The Cu complexing ligand concentration are similar to the copper concentration: /LCu = 0.92 ± 0.06, and the concentrations of Cu-HS are systematically less than the concentration of Cu-binding ligands: /LCu = 0.69 ± 0.05.This indicates that, although the estuarine HS constitute the majority of the Cu-binding ligands, other ligands are able to complex the Cu as well.The concentration of LFe was virtually the same as that of Fe-HS and both are greater than , indicating that there was an excess of the ligand concentration and that almost the entire ligand concentration can be ascribed to HS of a similar nature to SRHA.A plot of Fe as function of LFe is straight, indicating that the Fe and L co-vary but the ligand concentration is always greater than .A diagram of as a function of LCu has an intercept on the X-axis of 11 nM indicating that a major component of the Cu binding ligands is different from HS.This is perhaps not surprising as thiols in estuarine waters are a known ligand for Cu, binding it as CuI and with only weak complex stability with divalent or trivalent metals: they are therefore detected as Cu-binding ligands by competition against SA, but not as Fe-binding ligands.The data points for are nearly the same as those for causing them to be superimposed in Fig. 
4B.A diagram of as function of LFe shows a linear relationship with a slope of near unity and an intercept that is < 1 nM ligands confirming that nearly the entire ligand concentration for Fe consists of HS.On this basis it is possible to calculate the amount of Fe that can be bound by estuarine HS from the ratio of LFe/, which gives a value of 30.3 ± 1.0 nmol/mg HS.This value is the same as the ratio at which Fe is bound by SRHA indicating that the HS occurring in estuarine waters and the SRHA have the same binding capacity for Fe on the mg/L scale.The measurements of HS in samples from the Mersey estuary using Fe and Cu indicated the presence of Fe-HS and Cu-HS species: this means that the HS is being complexed with Cu as well as Fe, suggesting that competition is possible between these metals for complexation with the HS.Competition between Cu, Co and Al with Fe has previously been demonstrated for SRHA and SRFA.Because the HS are the sole ligand for Fe in these waters, it is possible to determine competition between Cu and Fe for these ligands from the effect of copper additions on the response for the estuarine Fe-HS.This competition method has previously been used to determine the stability of Cu complexation with SRHA and SRFA and the principle of the method is the same.The data was interpreted using the theory developed previously modified for the use of marine HS by using the ligand concentrations for the mass balance required for the competition modelling, to get a value for the total ligand concentration.The difference between LFe and is < 5% so this introduced only a minor change.Curve fitting was used to fit a K′Cu′HS value to all data simultaneously after initially calculating a value for each data point.The concentration of Fe-HS was found to decrease in response to the copper additions suggesting that Cu caused Fe to be released from Fe-HS as a result of competitive complexation.The copper additions were over a large range from nM to μM: the high range was necessary to obtain a good data fit, but was responding already at Cu additions in the nM range, indicating that the effect is of importance at the concentrations typical for estuarine conditions.The complex stability for the Cu-HS calculated from the competition titrations, is summarised in Table 5.These are an order of magnitude lower than the complex stabilities for total Cu ligands in the samples, further supporting the presence of another, stronger, ligand for Cu, such as thiols.The change in B′Fe′SA2 as a function of Sal was much greater than in K′Fe′SA and was therefore fitted using a log–log function).Values calculated for K′ and B′ at salinity 35 using these equations match the previous values at the same salinity: the reference value for log K′Fe′SA is 6.52, compared to 6.55 from Eq., and the reference value for log B′Fe′SA2 is 10.72, compared to 10.67 using Eq.The stability of the 1:1, Fe-SA, complex was found to increase a small amount when the salinity was decreased from 35 to 4, whereas the value of log B′Fe′SA2 increased from 10.7 to 12.0 over the same salinity range.Previously it was thought that the CSV response for Fe was based on the concentration of FeSA2, whereas recent data shows that FeSA is the adsorptive species and formation of FeSA2 is the cause for a decrease in the sensitivity− 1) when increases above the optimal concentration.The lower salinity data in this work confirms that scenario: the relative sensitivity can be seen to decrease with increasing at each salinity tested when is greater 
This optimal concentration can be seen to move to a lower [SA] with decreasing salinity, whilst at the same time the maximum sensitivity at each salinity also decreases. This decrease in the maximum sensitivity is somewhat counter-intuitive, as it might be expected that the complex stability, and therefore the complexation of Fe by SA, should increase with decreasing salinity due to less competition by the major cations. The values for K′Fe′SA and B′Fe′SA2 do indeed increase with decreasing salinity, indicating that the complexation of Fe by SA, and its complex stability, increase with decreasing salinity. However, the overall sensitivity diminishes because the stability of the FeSA2 species increases more rapidly than that of FeSA. This comparative change means that the FeSA species is increasingly outcompeted by the FeSA2 species at lower salinity, which explains the shape of the sensitivity curves. The maximum response at low salinity is obtained at [SA] = 1 μM, whereas the response is greatest at [SA] = 5 μM at a salinity of 35. The concentration of SA was kept constant at 5 μM in this work to standardise the analyses. The concentrations of Cu-HS and Fe-HS were measured here independently by voltammetry using either the peak for the Cu-HS or that for the Fe-HS species. The results shown were calibrated against SRHA, and separate tests showed that the sensitivity was the same using SRFA: the mg/L concentrations of HS are therefore independent of whether SRFA or SRHA is used. A difference between the two species is that the Cu-HS was determined after saturation with Cu and the Fe-HS after saturation with Fe. The good agreement in the data shows that either method can be used to determine marine HS. The peak for Cu-HS is near to the peak for free Cu′, which requires careful elimination by subtracting a blank scan and by minimising the Cu addition. Detection of Fe-HS does not suffer from this complication and is therefore arguably easier, but the Fe-HS peak could be unstable due to precipitation of the excess inorganic Fe. The limit of detection of the Fe-HS method was 3 μg/L HS using a 240 s adsorption time in seawater. A higher LoD of 100 μg/L has been quoted for the Cu-HS method. Conversion of the Fe-HS and Cu-HS from the mg/L to the nM scale would be simple if there were a single metal-binding capacity of the HS. Our data show that there are differences in the numbers of Cu and Fe binding sites on SRHA: 18 nmol Cu/mg SRHA compared to ~31 nmol Fe/mg SRHA. SRFA binds at a different ratio from SRHA: the ratio for Cu is the same as on the SRHA, whereas the number of Fe-binding sites on SRFA is approximately half that on the SRHA: 17.6 nmol Fe/mg SRFA, compared to 32 nmol Fe/mg SRHA. The SRHA binds nearly twice as much Fe as Cu, which might suggest that the Fe-HA complex is 2:1 whereas the Cu complex is 1:1, or that two sites are available for Fe on the SRHA. The SRFA data are then consistent with 1:1 species for Fe-SRFA and Cu-SRFA. Not knowing whether the HS in estuarine waters are like FA or HA complicates the conversion of Fe-HS to the nanomolar scale, which is necessary for comparison with the ligand concentrations. The nanomolar concentrations in Table 4 were calculated using the ratios for SRHA. The nanomolar concentrations of the Cu-HS and Fe-HS in Table 4 differ because of the different complexation ratios.
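As a concrete illustration of the mg/L-to-nanomolar conversion discussed above, the following sketch (Python; an editorial illustration with illustrative names) applies the metal-binding capacities quoted in the text. The example values of 0.05 and 0.2 mg HS L−1 are the residual ocean-salinity estimates discussed further below.

```python
# Metal-binding capacities quoted in the text (nmol metal per mg humic material)
BINDING_CAPACITY = {
    ("SRHA", "Cu"): 18.0,   # nmol Cu / mg SRHA
    ("SRHA", "Fe"): 30.6,   # nmol Fe / mg SRHA
    ("SRFA", "Fe"): 17.6,   # nmol Fe / mg SRFA
}

def hs_to_nM(hs_mg_per_L, standard="SRHA", metal="Fe"):
    """Convert an HS concentration (mg/L) to its metal-binding equivalent (nM)."""
    return hs_mg_per_L * BINDING_CAPACITY[(standard, metal)]

# Example: the residual HS levels obtained by extrapolation to salinity 35
print(hs_to_nM(0.05, "SRHA", "Fe"))  # ~1.5 nM Fe-binding equivalent
print(hs_to_nM(0.20, "SRHA", "Cu"))  # ~3.6 nM Cu-binding equivalent
```

The choice of standard matters: using the SRFA ratio for Fe would roughly halve the nanomolar estimate, which is why the text stresses the uncertainty over whether estuarine HS behaves as HA or FA.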
The data in Fig. 5B were plotted using a binding capacity of 30.6 nmol Fe/mg, as valid for SRHA. The slope of nearly unity confirms that the Fe-binding ratio of the marine HS (30.3 ± 1 nmol Fe/mg HS) is nearly the same as that for SRHA, indicating that the HS in these waters behaves as HA rather than FA, which has a lower binding ratio of near 17 nmol Fe/mg FA. The concentration of HS decreases with increasing salinity, largely in line with what is known about HS. Extrapolation of the concentration of HS in the Mersey waters to a salinity of 35 gives an estimated value of 0.05 mg HS L−1, which is, coincidentally in view of the noise in the data, the same as that found for residual HS in the Amazon outflow. A value of 0.2 mg HS L−1 is found by extrapolation of the Cu-HS data. Ligand concentrations had been determined by metal titration in the presence of SA, which therefore constitutes a method that is independent of the HS determination. The concentration of the iron-binding ligand was greater than [Fe] in all samples ([Fe]/LFe = 0.81 ± 0.02). This makes sense, as any Fe in excess of the ligand concentration would tend to precipitate, due to the low solubility of inorganic Fe, until the remainder is kept in solution by the excess of ligands. The plot of [Fe] as a function of LFe is linear, showing the strong effect of organic complexation on the geochemistry of Fe found previously for estuarine waters. The Cu concentration is also controlled by complexation: [Cu]/LCu = 0.92 ± 0.06 for stations collected in May 2013, which is greater than the ratio found for the high-salinity end member collected in May 2014. Comparison with the concentrations of Fe-HS and Cu-HS shows that the Fe-HS constitutes all of the ligands for Fe, whereas a much larger intercept on the ligand axis is obtained for Cu, suggesting that a significant proportion is not from HS. The concentration of Cu-binding HS was on average 0.69 ± 0.05 × LCu, indicating that on average about 30% of the Cu-binding ligands are not of HS origin. Again there was a discrepancy in the high-salinity end-member value, where the concentration of Cu-binding HS was 0.3 × LCu, much lower than the ratio of 0.7 determined at the other stations the previous year. Other than the end-member station, the copper-binding ligands are therefore largely, but not exclusively, of humic origin. The discrepancy makes sense, as thiols also occur in estuarine waters, are a ligand for Cu, and apparently tend to account for a higher percentage of the Cu ligands at higher salinity. Although it was not possible to accurately model two ligand classes for these samples, we are currently working on samples where we can distinguish between thiols and humics as competing ligands for Cu in estuarine water. Co-variation of the Cu concentration with the complexing ligands, and the effect on the Cu geochemistry, has been found before in waters from the Scheldt estuary and elsewhere. The average value of log K′Fe′L between salinity 18.8 and 32.2 was 11.2 ± 0.1, similar to that found for SRHA added to seawater and also similar to the complex stability of ligands in coastal waters. Other work on estuarine waters found log K′Fe′L varying between 11.1 and 13.9; the highest values are greater than those found here, which could be due to measurements at much lower salinity. The competition experiments demonstrated competition between Cu and Fe for HS in the marine environment. The competition data were used to obtain a value for the complex stability of Cu with the HS: an average value of 10.6 ± 0.4 was found for log K′Cu′HS, which compares to a value of 11.5 ± 0.3 for log K′Cu′L from the Cu-complexing ligand titrations against SA.
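The extrapolation of the HS concentration to the seawater end member mentioned above amounts to fitting a linear mixing line against salinity and evaluating it at salinity 35. The sketch below shows the approach; the salinity/HS pairs used here are hypothetical placeholder values for illustration only, not the measured Mersey data reported in the paper.

```python
import numpy as np

# Hypothetical (placeholder) estuarine data: salinity and Fe-binding HS (mg/L).
# The measured Mersey values are reported in the paper; these are illustrative only.
salinity = np.array([18.8, 22.5, 26.0, 29.4, 32.2])
hs_mg_L  = np.array([0.60, 0.48, 0.35, 0.22, 0.14])

# Linear mixing line: [HS] = slope * salinity + intercept
slope, intercept = np.polyfit(salinity, hs_mg_L, 1)

# Residual HS expected at the ocean end member (salinity 35)
hs_at_35 = slope * 35 + intercept
print(f"Extrapolated HS at salinity 35: {hs_at_35:.2f} mg/L")

# Converting with the SRHA Fe-binding capacity (30.6 nmol/mg) gives the
# equivalent Fe-binding ligand concentration in nM.
print(f"Equivalent Fe-binding ligand: {hs_at_35 * 30.6:.1f} nM")
```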
There is therefore a significant difference between the values of K′Cu′HS and K′Cu′L, which must be due to the greater stability of complexes with other organic ligands. It can be speculated that these other Cu species may be thiol complexes, which are known to be more stable, with complex stabilities (log values) of 12–14. The value of log K′Cu′HS from the competition data is very similar to that for Cu complexation with riverine humics, confirming that the HS in the Mersey behaves similarly to SRHA. The new values for K′Fe′SA and B′Fe′SA2 obtained at salinities between 4 and 35 give a significantly larger value for the α-coefficient for complexation of Fe by SA than that calculated based on the previous values for B′Fe′SA2 alone. Calculation of the α-coefficients and comparison with those calculated using the previous complex stability shows that the difference is a factor of ~9 at a salinity of 2 and 30 μM SA, whilst the difference at higher salinity is less but still important. It is of special importance to use the correct stability constants when the detection window is varied, as the ratio of the new and old α-coefficients varies with the concentration of SA: the ratio changes from 7.3 to 1.6, or from 11.9 to 9.4, when [SA] is raised from 5 to 30 μM. Using the old constants would lead to a shift in the detected K′Fe′L with the detection window, which could be incorrectly attributed to the presence of more than one ligand. The co-variation of Cu and Fe with the ligand concentrations shows that the geochemistry of both metals in the Mersey estuary is controlled by organic complexation, as has been found previously for Cu and Fe in estuarine waters and seawater. The similarity of the concentration of Fe-binding HS to the ligand concentration shows that nearly the entire ligand concentration is represented by HS. The concentration of Cu-binding HS is less than the Cu-binding ligand concentration, indicating that a second ligand plays a role for Cu. The competition experiments show that complexation of Fe by these ligands is affected by competition from Cu. The concentration of HS found as Cu-HS and that found as Fe-HS are the same, which means that the HS binding these metals is the same. This is confirmed by the competition experiments, which show that Cu competes with Fe and displaces it from Fe-HS when added in sufficient quantity. Competition occurs when [Cu] > [Fe] because of the similarity of K′Fe′HS and K′Cu′HS. This competition may cause the complexation of Fe with HS to be finely balanced. Any excess of Fe over the ligand concentration causes the [Fe] to diminish with time, due to the low solubility of inorganic Fe, as the water travels through the estuary and beyond. The ligand concentration in the high-salinity end member is only slightly larger than the Fe concentration. Competition by Cu, and variations therein due to variations in the concentration of thiols, could vary the amount of HS available for complexation with Fe and could therefore affect the geochemistry of Fe. Similarly, the competition by Cu can be expected to affect the availability of Fe to microorganisms in seawater if this relationship plays a role in open-sea conditions. Extrapolation of the concentration of Fe-HS to an ocean salinity of 35 gives a residual level of 0.05 mg HS L−1, equivalent to an Fe-binding ligand concentration of 1.5 nM, which is similar to Fe-binding ligand concentrations found in ocean waters. Extrapolation of the Cu-HS gives a residual level of 0.2 mg HS L−1 in seawater, equivalent to a Cu-binding ligand concentration of 3.6 nM, which is comparable to levels found in ocean waters. Further work is required to confirm whether HS is an important ligand for Cu as well as Fe in ocean waters, and whether terrestrial HS is transported from land to the ocean.
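The α-coefficient argument above can be illustrated with a short calculation. The sketch below assumes the conventional side-reaction formulation for a mixed 1:1/1:2 system, α_Fe′SA = 1 + K′Fe′SA[SA] + B′Fe′SA2[SA]^2 (an assumption of this illustration, not a statement of the paper's exact formulation), together with the salinity-dependent fits quoted in the summary below; it shows how the coefficient, and hence the detection window, shifts with [SA] and salinity, but it does not attempt to reproduce the ratios of new to old α-coefficients reported in the text.

```python
import math

def k_fe_sa(sal):
    # Fitted 1:1 stability constant K'Fe'SA (linear in salinity)
    return -2.98e4 * sal + 4.60e6

def b_fe_sa2(sal):
    # Fitted 1:2 stability constant B'Fe'SA2 (log-log fit against salinity)
    return 10 ** (-1.41 * math.log10(sal) + 12.85)

def alpha_fe_sa(sal, sa_molar):
    """Side-reaction coefficient of Fe' with SA (assumed form: 1 + K'[SA] + B'[SA]^2)."""
    return 1 + k_fe_sa(sal) * sa_molar + b_fe_sa2(sal) * sa_molar ** 2

for sal in (2, 35):
    for sa_uM in (5, 30):
        a = alpha_fe_sa(sal, sa_uM * 1e-6)
        print(f"Sal {sal:2d}, [SA] = {sa_uM:2d} uM: log alpha = {math.log10(a):.2f}")
```

With these fits the α-coefficient rises steeply with [SA] at low salinity because B′Fe′SA2 grows much faster than K′Fe′SA, which is consistent with the detection-window argument made above.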
We determined the concentration of iron- and copper-binding humic substances (Fe-HS and Cu-HS) in estuarine waters along with the concentrations of iron- and copper-complexing ligands (LFe and LCu). Suwannee River humic acid (SRHA) was used as a humic standard. The complex stability of Fe with salicylaldoxime (SA) was calibrated for salinities between 4 and 35 and fitted to linear equations to enable Fe speciation in estuarine waters: K′Fe′SA = −2.98 × 10^4 × Sal + 4.60 × 10^6 and log B′Fe′SA2 = −1.41 × log Sal + 12.85. The concentration of Cu-HS in waters from the Mersey estuary and Liverpool Bay was less than the overall ligand concentration ([Cu-HS]/LCu = 0.69 ± 0.05), suggesting that a second ligand was of importance to Cu complexation. The concentration of Fe-HS was virtually equal to the total ligand concentration for Fe ([Fe-HS]/LFe = 0.95 ± 0.16), confirming that humics are responsible for Fe complexation in these waters. The concentration of HS determined from Fe-HS was within 4% of that found from Cu-HS, confirming that the same substance is detected. The average complex stability (log K′Fe′L) was 11.2 ± 0.1, the same as for log K′Fe′-SRHA. Copper additions demonstrated competition between Cu and Fe for the HS-type ligands. This competition was used to determine the complex stability for the Cu-HS species, giving a value of 10.6 ± 0.4 for log K′Cu′HS, which is nearly a unit less than the complex stability, log K′Cu′L = 11.4 ± 0.2, found for all Cu ligands (the HS and the unknown ligand combined). The competition affects the complexation of both metals with HS-type ligands. Extrapolation of the concentration of Fe-HS to an ocean salinity of 35 gives a residual level of 0.05 mg HS L−1, equivalent to an Fe-binding ligand concentration of 1.5 nM. If HS-type ligands are confirmed to be ubiquitous in coastal or ocean waters, competition reactions could be of importance to the bioavailability of both metals to marine microorganisms.
463
Central serous chorioretinopathy: Towards an evidence-based treatment guideline
Central serous chorioretinopathy is a chorioretinal disease that causes idiopathic serous detachment of the retina, which is associated with one or more areas of leakage from the choroid through a defect in the retinal pigment epithelium outer blood-retina barrier. The majority of patients are men who have decreased and/or distorted vision together with altered colour appreciation, and CSC is generally associated with a decrease in the patient's quality of life. The age at onset for CSC can be as early as 7 years and as late as 83 years, with a peak at 40–50 years. CSC is relatively common, being considered the fourth most common non-surgical retinopathy associated with fluid leakage (after age-related macular degeneration, diabetic macular oedema, and retinal vein occlusion). Although the subretinal fluid can resolve spontaneously, many patients have significant clinical sequelae, including atrophy of the RPE or retina, and patients can also develop subretinal neovascularisation. The pathogenesis of CSC remains poorly understood; however, choroidal abnormalities are believed to be the primary underlying pathophysiology. These abnormalities can include choroidal thickening and hyperpermeability, together with increased hydrostatic pressure, which has been hypothesised to induce detachment of the RPE. These points of RPE detachment can remain isolated, but breakdown of the outer blood-retina barrier can also cause leakage of fluid into the subretinal space, resulting in active CSC. The chronic presence of SRF can ultimately damage the RPE, although in some cases the underlying multifocal choroidal vascular dysfunction can directly affect the RPE without the presence of SRF. Central serous chorioretinopathy was first described as 'relapsing central luetic retinitis' by Albrecht von Graefe more than 150 years ago. In the 1930s, Kitahara changed the name to central serous chorioretinitis, describing many of the clinical features associated with the disease and hypothesising that the disease occurs secondary to tuberculosis. At around the same time, Horniker called the condition 'capillaro-spastic central retinitis' and postulated that the disease has a vascular origin. In the 1940s, the condition was renamed 'central serous retinopathy' by Duke-Elder. At the time, the disease was believed to occur secondary to spasms of the retinal vessels, which were thought to cause subretinal leakage of fluid. The majority of cases reported at that time were military recruits in World War II; therefore, most of the cases were young men. Even back then, there was a focus on the autonomic nervous system. For example, in 1955 Bennett noted from his review of the literature and his personal analysis of patients with CSC that ' … while admitting that certain individuals ‒ call them allergic, neurotic, endocrinopathic, vasculospastic, or what you will ‒ are peculiarly susceptible to an attack, we should not rule out an immediate essential cause, possibly infective.' Bennett also reported a high incidence of 'stress diseases' and a history of stress-producing life situations, as well as a 'tense obsessional mental make-up' among affected patients. Maumenee used fluorescein angioscopy to obtain fundamental information regarding the pathophysiology of the disease, finding that the condition is associated with leakage at the level of the RPE, not from the retinal vessels. The same group later suggested that a recently invented device ‒ the laser ‒ might be used to treat this leak. In a landmark paper, Gass outlined many of the modern ideas of what he called idiopathic CSC, proposing
that increased permeability of the choriocapillaris causes increased hydrostatic pressure in the choroid. This increased hydrostatic pressure in the choroid and hyperpermeability of the choriocapillaris give rise to pigment epithelial detachments and defects in the RPE monolayer, allowing fluid to leak under the neuroretina. This differs from neovascularisation, in which PEDs occur due to leakage from newly formed vessels. Although many alternative theories were proposed, the concept of choroidal hyperpermeability was confirmed decades later with the introduction of indocyanine green angiography and optical coherence tomography. A population-based study in Olmsted County, MN, USA found that the annual age-adjusted incidence of CSC from 1980 through 2002 was 9.9 and 1.7 per 100,000 in men and women, respectively, in a predominantly Caucasian population. A more balanced sex-based distribution was found in a population-based study from Taiwan, with an annual incidence of 54.5 and 34.2 per 100,000 corticosteroid users in men and women, respectively. A South Korean cohort study of corticosteroid users and non-users found that the total incidence of CSC was 5.4 and 1.6 per 10,000 person-years in men and women, respectively. These discrepancies in the reported incidence of CSC may be due to methodological and/or ethnic differences. Nevertheless, the reported incidences may have been underestimated, as Kitzmann et al. excluded patients without fluorescein angiography data, and Tsai et al. and Rim et al. based their studies on insurance claims data from nearly all nationwide claims submitted by healthcare providers in Taiwan and South Korea, respectively. No significant differences in incidence rates and disease spectrum were reported in a retrospective analysis comparing 15 African American and 59 Caucasian patients with CSC. In Asians, however, pachychoroid diseases such as polypoidal choroidal vasculopathy may be more prevalent than in Caucasians. Multimodal imaging is essential in order to accurately diagnose CSC. Using a combination of FA, ICGA, OCT, and fundus autofluorescence allows the practitioner to distinguish between CSC and other conditions with overlapping clinical features. Using OCT, the presence of SRF can be both assessed and quantified, which is generally considered useful for estimating the episode duration and for determining the subsequent treatment strategy. Moreover, FAF imaging can help estimate the duration of the CSC episode and the damage induced by CSC, and can also help determine the appropriate treatment strategy. The combination of OCT, FA, ICGA, and OCT angiography can be used to detect subretinal neovascularisation, which may be challenging to conclusively confirm. Several subtypes of CSC have been proposed, but these are still subject to debate, and there is currently no universally accepted classification system for CSC. This debate is based largely on the variable course of the disease and discrepancies with respect to the classification of CSC among ophthalmologists. Many authors use a basic distinction between acute CSC (aCSC) and chronic CSC (cCSC) based on the duration of SRF and the structural changes visible on multimodal imaging. Although the serous detachment in aCSC usually resolves within 3–4 months without the need for treatment, the detachment tends to persist in cCSC, and the chronic presence of SRF commonly leads to permanent structural damage in the neuroretina and RPE, with irreversible long-term vision loss. In the aCSC/cCSC classification system, aCSC usually presents with one ‒ or just a few ‒ focal leaks and produces an
isolated dome-shaped neuroretinal elevation, with few atrophic changes in the RPE.In contrast, patients with cCSC can present with a large number of leaks, and the chronic leakage of SRF tends to produce a larger, less elevated neuroretinal detachment.However, some patients with CSC present with one or several leaks that last more than 4 months but are not associated with widespread RPE changes, a shallow detachment, or decreased visual acuity.It is therefore debatable whether this clinical subgroup should be classified as aCSC or cCSC.Given this wide clinical variability and overlap, progress towards a new classification system has been slow; however, reaching a consensus regarding the classification of CSC is an important first step towards better defining the disease subgroups and treatment endpoints.The subcategories that have been proposed include non-resolving CSC, recurrent CSC, and inactive CSC, as well as severe CSC based on multimodal imaging.Patients with a single point of leakage are considered to have focal leakage, whereas patients with several focal leakage points or ill-defined areas of dye leakage on FA can be categorised as having diffuse leakage.A focal leakage point on early FA typically increases in size with indistinct borders in the late phase of FA due to the leakage of fluorescein through the focal defect in the RPE.This focal area often co-localises with a dome-shaped RPE detachment and is presumed to be the point of least resistance at the RPE outer blood-retina barrier due to damage by increased wall stress induced by an increase in the vascular pressure gradient from the choriocapillaris.As a result of this small tear in the RPE or focal outer blood-retina barrier defect, fluid can flow from below the RPE into the subretinal space.It is important to create at least a basic distinction between the various clinical subtypes of CSC in order to define treatments, which can be used in study designs.In this review, we use the basic distinction between aCSC and cCSC, as this clinical distinction is the most widely used in the context of the natural history and treatment of CSC.Acute CSC is defined as an acute-onset, dome-shaped serous detachment of the neuroretina, with spontaneous complete resolution of the resulting SRF in 3–6 months together with a good visual prognosis.Patients with aCSC often present with altered vision and hypermetropisation.In a study involving 27 patients with CSC with an average follow-up of 23 months, SRF spontaneously resolved in all 27 patients within an average duration of follow-up of 3 months.In another study of 31 patients with aCSC, SRF completely resolved by 6 months of follow-up in 84% of patients.However, SRF has been reported to recur in up to 52% of patients.More importantly, even in patients who had SRF for only a short period of time, CSC can lead to irreversible damage to photoreceptors; thus, treatment may also be indicated in aCSC cases.Interestingly, some patients self-describe their disease duration as lasting only a few days, whereas fundus imaging may reveal evidence of prolonged disease; patient-reported disease duration may therefore be considered unreliable.Most studies reporting the spontaneous course of CSC were published before the availability of OCT, meaning that residual shallow detachments were difficult ‒ or impossible ‒ to identify at that time.Several risk factors for prolonged CSC duration have been identified at presentation, which may influence the decision regarding whether or not to treat.These risk 
factors include subfoveal choroidal thickness > 500 μm, PED height > 50 μm, presentation at 40 years of age or older, and photoreceptor atrophy of the detached retina together with granular debris in the SRF on OCT.Patients who present with aCSC with large amounts of SRF may be more prone to photoreceptor loss compared to patients who present with relatively small amounts of SRF.In aCSC, 1–3 focal leakage points are typically visible on FA.The classic features of aCSC on FA include a pinpoint hyperfluorescent RPE defect with an ascending area of hyperfluorescence over time, commonly referred to as a ‘smoke stack leakage’.This pattern of leakage can be caused by a mechanical disruption in the RPE with choroidal heat patterns and molecular differences between the fluorescein dye and the fluorescein albumin conjugate, combined with gravitational forces that give rise to this characteristic pattern of fluorescein dye in the subretinal space.More commonly, an ‘ink-blot’ pattern of leakage occurs, in which the focal leak that appears during dye transit becomes poorly defined, as the dye leaks more slowly into the subretinal space through the RPE defect.Patients who present with a smoke stack leakage on FA may have a larger serous detachment compared to patients with an ink-blot leakage, which can result in increased metamorphopsia.The location of the focal leakage point is usually correlated with a micro-tear in the RPE.In aCSC, these defects occur in the absence of diffuse atrophic changes in the RPE.In areas in which FA shows focal leakage, ICGA can reveal areas of choroidal vascular hyperpermeability, possibly depending on whether the pore size is large enough to allow the escape of indocyanine green‒bound plasma proteins.On the other hand, choroidal hyperpermeability does not always correspond to the hyperfluorescent area on FA.Indeed, the hyperfluorescent areas seen on ICGA are often more extensive than the hyperfluorescent areas on FA, which is believed to be due to the higher permeability of large choroidal vessels.Hypo-autofluorescent abnormalities on FAF have also been found to correlate with areas of leakage on FA, which may indicate the involvement of the RPE in the pathophysiology of CSC, as FAF reflects the structural and functional status of the RPE.The volume of SRF can be quantified using OCT, and higher SRF volume may be associated with poorer best-corrected visual acuity.The presence of subretinal hyperreflective dots on OCT ‒ which may represent macrophages that contain phagocytosed outer segments ‒ can migrate progressively into the neuroretina in patients with a prolonged disease course.However, subretinal hyperreflective dots can also represent plasma proteins from the choriocapillaris and inflammatory debris.OCT can reveal fibrin clots that result from fibrinogen leaking through a defect in the RPE.Although changes in choroidal haemodynamics have been observed in aCSC using laser speckle flowgraphy, subfoveal choroidal thickness does not appear to be correlated with the amount of SRF.In contrast, SRF resolution and BCVA in patients with aCSC appear to be related to macular choroidal blood flow velocity, with flow velocity decreasing as aCSC resolves.Non-resolving CSC has been described as a variant of aCSC in which SRF persists for more than 4 months without atrophic RPE abnormalities.Moreover, recurrent CSC has been defined as an aCSC episode followed by one or more episodes after complete SRF resolution.Chronic CSC is characterised by serous detachment of the retina, 
with either small or more extensive areas of serous detachment of the RPE, together with atrophic changes to the outer retina and RPE developing secondary to choroidal vasculopathy.On FA, one or more focal leakage points can be visible; alternatively, distinct points of leakage can be absent or difficult to identify against a background of irregular RPE translucency.Patients with cCSC typically have persistent serous detachment on OCT for longer than 4–6 months.Eyes with cCSC often have widespread ICGA abnormalities, including delayed choroidal filling, dilated choroidal veins, and/or choroidal vascular hyperpermeability.Relatively few patients with cCSC have a history of aCSC, which may indicate significant clinical differences between aCSC and cCSC.Interestingly, however, aCSC and cCSC share several genetic risk factors and possible pathophysiological overlap, particularly given similarities with respect to multimodal imaging.In this respect, it is interesting to note that a retrospective study found that 50% of unspecified CSC patients developed atrophic changes in the RPE within 12 years of presentation.No marked clinical differences have been reported between cCSC patients with focal leakage and those diffuse leakage on FA, which may indicate that the choroid is the primary involved structure both in cCSC patients with focal and with diffuse leakage.Diffuse atrophic changes in the RPE and atrophic tracts may be the result of previous CSC episodes and the prolonged presence of SRF under the serous neuroretinal detachment, or it may be the result of an underlying choroidal dysfunction that directly affects the RPE, for example as seen in pachychoroid pigment epitheliopathy.The term gravitational tract is used to describe areas of RPE and photoreceptor outer segment atrophy, hyperfluorescence on FA, and mixed hyperautofluorescent and hypo-autofluorescent changes on FAF, which extend inferiorly of the prominent points of leakage.These tracts occur passively due to prolonged leakage and should not necessarily be targeted for treatment.The location of the accumulated SRF may be linked to the hyperfluorescent area on OCT, and granular hypo-autofluorescence due to RPE atrophy may be present on FAF.The progression of the autofluorescence patterns in cCSC is slow, taking an average of 24 months for the granular hypo-autofluorescent changes to progress to a confluent pattern of hypo-autofluorescence.When outer segment debris persists in the subretinal space, it becomes increasingly hyperautofluorescent.In cases of cCSC with more marked and/or extensive atrophic changes in the RPE, patients often do not present with a dome-shaped PED; rather, these patients present with a shallow, broader PED that ‒ in some cases ‒ can have an underlying neovascular component."This neovascular component should be suspected in cases in which the space between the shallow PED and Bruch's membrane on OCT contains mid-reflective ‒ presumably neovascular – material rather than being hyporeflective, which is more suggestive of sub-RPE fluid.En face swept-source OCT and OCT angiography can be useful in identifying choroidal neovascularisation without the use of conventional angiography.Some cases of cCSC can be complicated by the accumulation of cystoid fluid, giving rise to a complication called posterior cystoid retinal degeneration, in which the cystoid changes do not necessarily involve the central macula, as they are typically extrafoveal at various locations in the posterior pole.Importantly, PCRD has been 
reported to cause a severe loss of central vision in some cases of CSC.The cystoid intraretinal spaces can be seen on OCT, but unlike typical cystoid macular oedema they do not stain on FA.PCRD is associated with cCSC symptoms that persist longer than 5 years.Foveal damage and vision loss can occur due to the intraretinal fluid itself, as well as the associated foveal detachment.In a study of 34 eyes with cCSC and PCRD, Cardillo Piccolino and colleagues found that visual acuity ranged from 20/20 to 20/400, with visual acuity of 20/40 or better in eyes in which the intraretinal fluid spared the foveal centre.Using OCT angiography, Sahoo and colleagues detected CNV in nearly half of the cases with cystoid macular degeneration.Patients with cCSC often experience a gradual decline in BCVA and contrast sensitivity due to damage to macular photoreceptors; approximately 13% of these eyes progress to legal blindness, reaching a BCVA of 20/200 or worse after 10 years.This marked loss of visual acuity can be due to atrophic RPE changes at the central fovea together with photoreceptor damage, cystoid macular degeneration, and/or secondary CNV.Descending tracts are more frequent in cCSC compared to aCSC.Although visual symptoms in cCSC usually present in only one eye, up to 42% of patients with cCSC show signs of bilateral abnormalities on FA.Bilateral CSC is relatively more common in patients of 50 years or older, with a prevalence of 50% in this age group compared to 28% in patients under the age of 50.Moreover, bilateral disease activity together with bilateral SRF accumulation is more common in cases with severe cCSC, affecting up to 84% of these patients.These patients with bilateral severe cCSC are highly prone to develop severe visual impairment.A rare yet severe manifestation of cCSC caused by many vigorous leaks is bullous retinal detachment, which commonly presents with the significant accumulation of subretinal fibrin.In some cases, bullous retinal detachment is accompanied by complete disruption of the edges of a PED, thereby producing an avulsion in the RPE.On average, men are 2.7–8 times more likely to develop CSC compared to women.The most important external risk factor for developing CSC is corticosteroid use, with an associated odds ratio of up to 37 to 1.However, the precise effect of corticosteroid use on CSC risk is unclear, as lower odds ratios ‒ in some cases, corresponding with only a slightly increased risk ‒ have been reported in patients who use corticosteroids.An increase in choroidal thickness and features of CSC have been reported in 1 out of 18 patients after high-dose corticosteroid treatment.In rare cases, even minimal exposure to corticosteroids via intranasal, inhalation, or extraocular application has been associated with an increased risk of CSC.In 1987, Yannuzzi reported an association between CSC and type A behaviour, which has personality traits that include an intense, sustained drive to achieve self-selected goals and an eagerness to compete, along with a desire for recognition and advancement.Additional components that have been reported as being part of the ‘CSC patient profile’ include impulsiveness, a drive to overachieve, emotional instability, and hard-driving competitiveness, all of which have been hypothesised to affect the risk of CSC.A stressful life event, shift work, poor sleep quality, and disturbances in the circadian rhythm have also been associated with an increased risk of CSC.Interestingly, individuals with type A behaviour are believed to 
have increased levels of corticosteroids and catecholamines, which may underlie their potentially increased risk of developing CSC. Moreover, many studies have described an association between CSC risk and both stress and certain personality traits. In contrast, a recent study involving 86 patients with cCSC found that the prevalence of maladaptive personality traits was similar between patients and a reference population. Various coping strategies have also been associated with CSC, and elevated psychological stress has been reported in CSC patients within a few weeks following the onset of ocular symptoms. Moreover, psychosocial status has been correlated with the phase and subtype of CSC, with CSC patients having a lower quality of life, more psychological problems, and higher anxiety compared to healthy controls. A history of psychiatric illness has also been associated with an increased risk of recurrence in CSC cases. Nevertheless, quantifying and qualifying stress ‒ and its association with CSC ‒ will likely require large systematic studies including detailed psychometric assessments using suitable, validated questionnaires. Endogenous hypercortisolism has also been reported to increase the risk of developing CSC. In addition, several studies found increased levels of cortisol in the serum of patients with CSC, albeit without meeting the diagnostic criteria for Cushing's syndrome. CSC can be a presenting symptom of Cushing's syndrome, and SRF was reported to resolve in patients following surgery for treating Cushing's syndrome. In their endocrinological work-up of 86 patients with cCSC, Van Haalen and colleagues found elevated 24-h urinary free cortisol levels, indicating increased activity of the hypothalamic-pituitary-adrenal axis; however, none of the patients in their study met either the clinical or biochemical criteria for Cushing's syndrome. Pregnancy has also been associated with an increased risk of CSC along with hypertensive and vascular disorders. This increased risk of CSC during pregnancy may be caused by hormonal changes that can induce vascular changes in the choroid. Although choroidal thickness does not appear to change during a healthy pregnancy, choroidal thickness can be increased in preeclampsia, and associated hypertension may also affect choroidal circulation. Choroidal hyperpermeability and stasis in the choroidal vessels, which may occur during preeclampsia, may also play a role in the development of CSC during pregnancy. Patients in need of treatment with mitogen-activated protein kinase (MEK) inhibitors may develop a serous retinal detachment due to toxicity or autoantibodies. These cases have been referred to as MEK inhibitor-associated serous retinopathy. In contrast to CSC, no choroidal hyperpermeability is visible on ICGA in these patients, there is no increase in choroidal thickness, and no PEDs or focal leakage on FA are present. Between 20 and 65% of patients treated with MEK inhibitors may develop a serous retinopathy, with only a minority of these patients developing mild symptoms, which are usually transient; discontinuation of this treatment for this reason is therefore generally not required. Other risk factors associated with CSC include gastro-oesophageal disorders such as Helicobacter pylori infection, uncontrolled systemic hypertension, antibiotic use, alcohol consumption, allergic respiratory disease, high socioeconomic status, smoking, coronary heart disease, obstructive sleep apnoea, poor sleep quality, autoimmune disease, and hyperopia; in contrast,
myopia was found to protect from CSC.With respect to cardiovascular disease, the pathogenic mechanism for CSC may lie in general endothelial cell dysfunction.Some studies reported a familial predisposition for CSC, which suggests that CSC may have a genetic component.Recently, several single nucleotide polymorphisms were associated with an increased risk of CSC.Some of these SNPs are located in genes involved in the complement system, including CFH, which encodes complement factor H, the C4B, which encodes complement factor 4B, and the NR3C2 gene, which encodes nuclear receptor subfamily 3 group C member 2, a mineralocorticoid receptor.In addition, CSC has been associated with the genes that encode age-related macular degeneration susceptibility 2, cadherin 5, vasoactive intestinal peptide receptor 2, and solute carrier family 7 member 5.Interestingly, a familial form of pachychoroid, possibly with an autosomal dominant inheritance pattern, has also been described, as well as an association with variants in the CFH and VIPR2 genes in an Asian cohort.For a more detailed discussion regarding this topic, the reader is referred to Kaye et al.Progress in Retinal and eye Research 2019.If untreated, 43–51% of patients with aCSC experience at least one recurrence.In patients with untreated cCSC, the reported 1-year recurrence rate is 30–52%.Several risk factors have been identified for CSC recurrence and disease progression, including the use of corticosteroids, untreated hypertension, a thick subfoveal choroid, non-intense hyperfluorescence on FA, and shift work.Moreover, depression and anxiety disorders have been associated with an increased risk of recurrence in both aCSC and cCSC."Severe cCSC tends to be progressive, although treatment can slow the disease's progression and stabilise BCVA.Interestingly, few patients who present with cCSC have a history of aCSC, which may indicate that in addition to having a different visual prognosis, different underlying disease mechanisms are likely involved in the aetiology and progression of the acute and chronic forms of the disease.Based on clinical evidence and FA findings, Gass suggested back in 1967 that hyperpermeability and increased hydrostatic pressure in the choroid may induce damage to the RPE, subsequently giving rise to either a PED or SRF leakage through a defect in the RPE outer blood-retina barrier.The presence of choroidal hyperfluorescence on ICGA supports the hypothesis that choroidal dysfunction is the primary underlying pathogenic mechanism in CSC.Other changes in the choroid further support the notion that abnormalities in choroidal structure and function play a fundamental role in the development of CSC; these changes include increased choroidal thickness, which can decrease after treatment, dilated veins in the Haller layer, atrophy of inner choroidal layers, increased choroidal vascularity index, and dysregulation of choroidal blood flow.Pathological processes that contribute to the observed choroidal abnormalities can include choroidal stasis, ischaemia, autonomic dysregulation, inflammation, and abnormalities in the complement system.However, classic inflammation within the choroid does not likely play a role in CSC, as corticosteroids can induce or worsen the disease.The above-mentioned pathological processes can lead to damage of the RPE outer blood-retina barrier and RPE alterations including serous PED, hyperplasia, and atrophy, which can be detected on FA and FAF.This hypothesis is supported by findings on OCT angiography, 
including increased signal intensity and thicker choriocapillaris vasculature.The choroidal thickness has been reported to vary over the day, which may lead to diurnal fluctuations in the amount of SRF that is present in CSC.Choriocapillary hypoperfusion has also been detected on OCT angiography in CSC cases, and this reduced perfusion may result in ischaemia in adjacent retinal tissues due to insufficient oxygen delivery.This focal choriocapillary ischaemia ‒ combined with adjacent hyperperfusion ‒ can result in SRF leakage.Choroidal vascular dysfunction is a key feature in theories explaining the pathophysiology of CSC, with RPE alterations being secondary to choroidal changes.The RPE plays an important role in the pathophysiology of CSC.Focal areas of leakage through RPE were hypothesised to underlie the accumulation of SRF in a study by Negi and Marmor who suggested that defects in the RPE lead to an outflow of SRF to the choroid.However, as described in section 1.3.1, there is overwhelming evidence that defects in the RPE are presumably secondary to choroidal dysfunction, as the choroidal abnormalities are more extensive than ‒ or at least as extensive as ‒ the RPE abnormalities, and choroidal dysfunction has been well-described using ICGA, structural OCT, and OCT angiography.Interestingly, RPE abnormalities can also be present in the unaffected eye in patients with unilateral CSC, despite an absence of SRF.Atrophy of the RPE is associated with a reduced choroidal permeability, seen as hypofluorescence on ICGA.This can be the result of progressive quiescence of the choriocapillaris after a long-lasting disease and chronic RPE atrophy, as the secretion of vascular endothelial growth factor from the RPE is required in order to maintain the normal structure and homeostasis of the choriocapillaris.The resulting increased hydrostatic pressure in the choroid may lead to reduced RPE barrier function, resulting in an accumulation of SRF.This hypothesis is supported by findings following photodynamic therapy, measured using both ICGA and enhanced depth imaging OCT.Apparently, secondary damage to the RPE can range from small focal lesions to extensive degeneration, which is sometimes referred to as either diffuse retinal pigment epitheliopathy or diffuse atrophic RPE alterations.An alternative theory to explain the pathogenesis of CSC posits that a focal loss of polarity of the RPE cells induces the active transport of SRF to the subretinal space.CSC is considered part of the pachychoroid disease spectrum.This spectrum encompasses several disease entities, all of which have common features that include a diffuse or focal increase in choroidal thickness, atrophy of the inner choroidal layers, dilated outer choroidal veins, and choroidal vascular hyperpermeability on ICGA.According to the pachychoroid disease hypothesis, disease progression can occur in multiple stages, yet many patients presumably never progress from the earlier stages to symptomatic advanced disease with visual impairment.In the earliest stage of the disease, uncomplicated pachychoroid, choroidal changes, and thickening of the choroid are present without visible RPE and/or neuroretinal changes, but the patient does not present with visual symptoms.In the second stage, referred to as pachychoroid pigment epitheliopathy, mild changes in the RPE appear.In the third stage of pachychoroid disease progression, CSC, SRF leakage causes serous neuroretinal detachment, presumably resulting from an acutely or chronically dysfunctional 
outer blood-retina barrier due to underlying choroidal thickening, congestion, and dysfunction.The fourth stage in the pachychoroid spectrum is pachychoroid neovasculopathy, which can include a polypoidal vasculopathy component.Patients with pachychoroid neovasculopathy ‒ either with or without a polypoidal component ‒ can present with serous SRF without having a history of CSC.It should be noted the term ‘pachychoroid’ literally means ‘thickened choroid’, and is therefore rather non-specific.Whether or not a choroid can be considered thickened is subject to debate and can depend strongly on a variety of factors such as refractive error, associated axial length, and time of day.Many patients with a relatively thickened choroid will never develop clinically relevant abnormalities such as pachychoroid pigment epitheliopathy or CSC.Conversely, some patients develop typical CSC despite having choroidal thickness within the normal range.Most CSC patients, however, have a significantly increased choroidal thickness in the affected eye, with only 5 out of 28 unspecified CSC eyes in a retrospective study having a choroidal thickness below 400 μm.Pachychoroid is associated with hyperopia, and CSC is extremely rare in myopic patients; however, typical CSC can still also occur in emmetropic ‒ and even myopic ‒ patients with choroidal thickness within the ‘normal’ range, if the choroid is relatively thickened and dysfunctional.Therefore, a thickened choroid is an important risk factor for CSC, but the actual dysfunctional, congestive, ‘leaky’ properties of such a choroid may be at least as important in the actual disease progression within the pachychoroid spectrum.The differential diagnosis for CSC encompasses a broad range of disease categories that should be taken into account when confronted with serous neuroretinal detachment or a clinical picture suggestive of such a detachment.The most common diseases in the differential diagnosis of CSC include diseases associated with macular neovascularisation, such as AMD and polypoidal choroidal vasculopathy.In order to differentiate between these diseases and CSC, one should obtain OCT, OCT angiography, FA, and ICGA imaging.Retinal drusen are a distinctive feature of AMD, while polypoidal lesions on OCT, OCT angiography, FA, and especially ICGA are typical for polypoidal choroidal vasculopathy.Other diseases in the differential diagnosis of CSC include inflammatory ocular diseases, ocular tumours, haematological diseases, genetic retinal diseases, ocular developmental anomalies, and medication-induced disease.An overview of these diseases is given in Table 1.An in-depth discussion of these differential diagnoses is beyond the scope of this review; therefore, the reader is referred to Kaye et al.Progress in Retinal and Eye Research 2019.Defining an optimal treatment for CSC is complicated by the broad range of disease presentations and clinical course, as well as the poorly understood pathophysiology of CSC, and lack of consensus on a classification system.Because of the relatively favourable visual prognosis for patients with CSC, the preferred treatment modalities should have a favourable safety profile.Most studies published to date analysed retrospective data and varied with respect to their inclusion and exclusion criteria, clinical definitions, and study endpoints.The only large, prospective multicentre randomised controlled treatment trial for the treatment of cCSC conducted to date is the PLACE trial.This trial compared differences in percentage of 
patients with complete resolution of SRF, BCVA, retinal sensitivity on microperimetry, and in the 25-item National Eye Institute Visual Function Questionnaire score between cCSC patients treated with either half-dose PDT or HSML.Additional large, prospective, randomised controlled trials performed over a defined treatment period are particularly important for CSC, given the relatively high likelihood of either spontaneous improvement or resolution of the serous neuroretinal detachment.If the study design is not appropriate ‒ in particular, lacking a suitable control group ‒ spontaneous improvement may cause the researcher to erroneously conclude that the treatment was effective.Given the range of interventions used for treating CSC, it should be obvious that the high rate of spontaneous improvement in CSC may explain the fact that non-systematic, non-prospective, non-randomised testing of a wide range of interventions has yielded many promising findings that have never been replicated satisfactorily.The aim of treatment for CSC is to preserve the outer neurosensory retinal layers and achieve complete resolution of the serous neuroretinal detachment and the underlying SRF, as even a small amount of remaining SRF can lead to irreversible damage to the photoreceptors.It is therefore commonly accepted that complete elimination of SRF, in order to restore normal anatomical and functional photoreceptor-RPE interaction, should be the principal surrogate endpoint in intervention trials regarding CSC.Following restoration of the photoreceptor-RPE anatomy, visual symptoms usually decrease gradually and BCVA improves.Even with anatomically successful treatment, the persistence of visual sequelae due to pre-existing irreversible retinal damage is relatively common; therefore, a meticulous clinical history should be obtained from the patient prior to treatment.These persistent visual symptoms can include suboptimal visual acuity, metamorphopsia, and loss of contrast and/or colour vision.Another important aim of treatment is to prevent recurrences and subsequent disease progression.Although an important question is whether the risk of recurrence is associated with any particular treatment, insufficient evidence is currently available.In aCSC, which has a relatively high rate of spontaneous resolution, an effective treatment should ideally prevent recurrences and subsequent disease progression.In cCSC, the primary aim of treatment is currently to achieve ‒ and maintain ‒ the complete resolution of SRF and intraretinal fluid."In addition, other factors such as subjective symptoms, the patient's age, and the patient's professional dependence on high visual acuity, may be taken into account.Young patients with CSC generally have a higher cumulative lifetime risk of recurrence compared to older patients, given their longer life expectancy.On the other hand, older patients with CSC have a higher risk of developing neovascularisation and/or polypoidal choroidal vasculopathy.While complete resolution of SRF should be the principal surrogate endpoint for trials on CSC, this may not be the case in AMD patients with accompanying SRF.In AMD patients with SRF, the BCVA may be relatively preserved and complete resolution of the SRF may not be required to still maintain a relatively favourable BCVA.However, CSC and AMD are different disease entities with potentially different types of and rates of leakage and a different composition of the SRF.Prolonged SRF can lead to irreversible damage to the photoreceptors, and a 
subgroup of cCSC patients can still have a significantly affected vision-related quality of life due to vision loss in progressive disease, and may even become legally blind.Regardless of the subtype of CSC, it is important to identify whether the use of corticosteroids or the presence of other risk factors is associated with CSC.Thus, patients may be advised to discontinue the use of all forms of corticosteroids provided that their general health permits."Patients with CSC should be referred to an endocrinologist if they present with symptoms indicative of Cushing's syndrome, including facial rounding, truncal obesity, and the presence of a dorsal fat pad. "In this respect, it is important to be aware that the signs and symptoms associated with Cushing's disease can be very subtle, and CSC can even be a presenting feature of this disease.Whether the patient is ‒ or might be ‒ pregnant should be discussed with women of childbearing age who present with CSC.The possibility of eradication of Helicobacter pylori infection is described in paragraph 2.3.7.6.Reduction of emotional stress, treating anxiety, a healthy diet, and enough sleep may be advised, although there is no strong evidence of positive effects with regard to CSC in this respect.Traditionally, the treatment of choice for CSC has been focal continuous-wave thermal laser treatment, typically with an argon or diode laser, but also with a krypton or xenon laser; with the diode laser being superior to argon laser in terms of BCVA outcome.This method of laser treatment targets the focal leakage point measured on FA and attempts to close the focal defect in the outer blood-retina barrier by applying photocoagulation to the affected area of the RPE.Laser photocoagulation should be limited to extrafoveal leakage sites, as vision loss, scotoma, reduced contrast sensitivity, and/or CNV can occur at the treated area.Although thermal laser treatment can reduce the duration of SRF, the final BCVA does not differ significantly compared to no treatment.Laser coagulation treatment has been shown to reduce the prevalence of SRF recurrence to 0 out of 29 treated eyes; moreover, treatment reduced the time until complete resolution to an average of 1 month.It should be noted that the 1997 study by Burumcek et al. 
was conducted before OCT became available.Recently, navigated laser photocoagulation was suggested as a safe and effective laser modality for treating CSC.Navigated laser photocoagulation integrates the information obtained using fundus photography and FA in order to identify the area to be treated; photocoagulation with a 532-nm laser is then performed automatically by computer at the marked area.Although navigated laser photocoagulation of the focal leakage point on FA achieves complete resolution of SRF in 75–94% of patients with cCSC, functional outcome with respect to BCVA is inconsistent.A long-term prospective randomised trial comparing conventional argon laser photocoagulation with no treatment found no difference between the two groups with respect to recurrence rate, visual acuity, or Farnsworth-Munsell 100-hue test outcome.Adverse events reported with laser photocoagulation include CNV at the treatment site.Finally, using photocoagulation to treat CSC does not change subfoveal choroidal thickness.At this point it is difficult to decide if treatment with photocoagulation is warranted in CSC.In the field of ophthalmology, transpupillary thermotherapy was first described for treating choroidal melanoma.The goal of TTT is to induce a mild increase in temperature specifically in the area to be treated.This increase in temperature may activate a cascade of reactions that presumably involve the production of heat shock proteins that help to repair the damaged RPE cells and may also lead to choroidal vascular thrombosis.Several techniques have been developed for inducing ocular hyperthermia, including the use of microwave radiation, localised current fields, ultrasound, and thermoseeds.The precise mechanism by which TTT is effective in treating CSC is unclear, but it may involve the induction of apoptosis in endothelial cells and/or vascular thrombosis, which may be useful for treating the underlying choroidal abnormalities in this disease.In CSC, TTT can be performed using an 810-nm pulse diode laser and for this disease it requires a shorter treatment duration compared to the treatment of choroidal melanomas, as CSC does not involve active proliferation of the choroid.In a study by Hussain et al., 79% of patients with cCSC had a complete resolution of SRF three months after treatment, and 53% of treated eyes had an improvement of ≥3 lines of visual acuity.In a case-control study involving 25 patients who received TTT and 15 observed patients, all of whom had a subfoveal leak, 96% of the treated patients had complete resolution of SRF within 3 months compared to only 53% of control-treated patients.However, one eye in the treated group developed subfoveal CNV.Manayath and colleagues performed ‘graded’ subthreshold TTT in 10 eyes with cCSC, initially using 60% of the threshold power; if the SRF persisted at 1 month, the power was increased to 80% of threshold for a second treatment session.Using this protocol, the authors found that 8 of the 10 treated eyes had complete resolution of SRF on OCT and 5 of eyes had an improvement of BCVA by ≥ 3 lines.In a prospective study involving 25 patients with cCSC, Mathur and colleagues found that 52% of patients had complete resolution of SRF at 3 months after TTT.In another study involving 5 patients who were treated with ICGA-guided TTT, complete resolution of SRF occurred in 2 patients at the 12-month follow-up visit.In addition, Kawamura and colleagues studied 8 patients who had severe CSC together with bullous retinal detachment, several 
diffuse leakage spots, or fibrin formation and found complete resolution of SRF in 5 patients within 1 month of receiving TTT treatment.Manayath and colleagues studied 22 patients with cCSC who declined to undergo PDT and therefore underwent TTT.The authors found a significant reduction in mean foveal thickness, but no significant difference in BCVA between patients who underwent TTT and patients who underwent PDT; interestingly, however, the patients who underwent TTT required more treatment sessions and had a longer interval until complete resolution of SRF compared to patients who underwent PDT.Finally, Russo and colleagues performed a prospective, randomised interventional pilot study involving 20 patients with cCSC who received TTT with a 689-nm laser at an intensity of 805 mW/m2 for 118 s; all 20 patients had complete resolution of SRF when assessed 10 months after treatment.On rare occasions, side effects such as macular infarction may occur following TTT.Therefore, additional prospective randomised controlled trials are warranted in order to evaluate further the efficacy and safety of using TTT to treat CSC.The use of a micropulse diode laser can induce more subtle effects in the outer retina compared to laser photocoagulation.Importantly, at the appropriate dose micropulse laser treatment can selectively target the RPE while preserving the photoreceptors and without causing visible tissue damage.Micropulse laser was first suggested as a viable option for treating macular oedema after retinal venous occlusion, and in patients with diabetic retinopathy.The first papers describing the use of subthreshold micropulse laser for CSC were published a decade later.However, the mechanism of action underlying micropulse laser treatment is poorly understood, and large prospective, randomised controlled trials regarding micropulse laser treatment have not been performed, with the exception of the PLACE trial, which compared subthreshold micropulse laser treatment to half-dose PDT in patients with cCSC.With subthreshold micropulse laser, photonic radiation is delivered to the retina in pulses lasting 0.1–0.5 s, each consisting of a ‘train’ of brief laser pulses.This approach allows for the dissipation of heat between pulses and minimises collateral damage; thus, the temperature stays below the threshold for denaturing cellular proteins, and no laser burns are induced.Therefore, the subthreshold laser technique does not have any visible effects on the retina.With high-density subthreshold micropulse laser treatment for CSC, the laser spots are targeted to the hyperfluorescent abnormalities on ICGA in a densely packed pattern, with adjacent non-overlapping spots focused on the designated treatment area.The radiation is absorbed by chromophores in the RPE ‒ primarily melanin ‒ and is dissipated as heat.When applied in a sub-lethal dose, the treatment is believed to increase the expression of heat shock proteins, which may restore cellular function in the RPE.Although no histopathological differences have been observed between micropulse laser application using 810 nm light compared to 532 nm light when measured in rabbits, the treatment's effects appear to differ between RPE cells of various sizes, shapes, and pigmentation types.Several micropulse laser types and strategies have been investigated in interventional studies involving CSC, as summarised in Table 2.The wavelengths that have been used in micropulse laser treatment for CSC include 810 nm, 577 nm, 532 nm, and 527 nm, and other adjustable
laser settings include the duty cycle, power, spot size, and pulse duration.The duty cycle is defined as the ratio between the ‘ON’ time and the total treatment time and ranges from 5% to 15% in various studies involving CSC patients.The power setting of the micropulse laser determines the intensity of the laser and ranges from 90 mW to 1800 mW in published studies.The spot size refers to the size of each individual micropulse laser treatment spot and ranges between 100 μm and 200 μm.Pulse duration is the time interval between each new pulse cycle and ranges from 100 ms to 300 ms.Thus, to achieve a duty cycle of 5–15% with a pulse duration of 200 ms divided into 100 micropulses, the ‘ON’ time of the micropulse laser per 2 ms micropulse will be 0.1–0.3 ms.Theoretically, the energy can be delivered to the retina with more precision using a smaller spot size.The combination of various settings determines the ‘dose’ delivered to the retina, and this dose should be high enough to achieve a therapeutic effect, but should not be so high that RPE or neuroretinal damage is induced.To date, no large prospective randomised controlled trials have been performed to compare various micropulse laser protocols.To complicate the analysis further, many settings vary between protocols, including the laser wavelength, making it difficult to compare the results of different treatment studies and determine the feasibility of using HSML for CSC.Several outcomes have been evaluated following HSML treatment, including retinal thickness, choroidal thickness, resolution of SRF, decrease in SRF height on OCT, retinal sensitivity on microperimetry, BCVA, and adverse events.Overall, 36–100% of patients with cCSC had complete resolution of SRF after HSML treatment based on retrospective studies and case series.
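The relationship between duty cycle, micropulse length, and ‘ON’ time quoted above is simple arithmetic, and the short sketch below makes the calculation explicit.This is an illustrative restatement only; the function name and example values are ours, drawn from the ranges given in this section rather than from any particular published laser protocol.

```python
def micropulse_on_time_ms(duty_cycle: float, envelope_ms: float, n_micropulses: int) -> float:
    """Return the 'ON' time per micropulse in milliseconds.

    duty_cycle     -- fraction of each micropulse spent 'ON' (e.g. 0.05-0.15)
    envelope_ms    -- duration of the pulse envelope in ms (e.g. 100-300 ms)
    n_micropulses  -- number of micropulses into which the envelope is divided
    """
    micropulse_period_ms = envelope_ms / n_micropulses   # e.g. 200 ms / 100 = 2 ms per micropulse
    return duty_cycle * micropulse_period_ms              # 'ON' fraction of each micropulse period

# Worked example from the text: a 200 ms envelope divided into 100 micropulses
for dc in (0.05, 0.15):
    print(f"duty cycle {dc:.0%}: ON time = {micropulse_on_time_ms(dc, 200, 100):.2f} ms per 2 ms micropulse")
# prints 0.10 ms and 0.30 ms, matching the 0.1-0.3 ms range quoted above
```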
Scholz et al. used ICGA-guided 577 nm HSML treatment in 42 eyes and found that a second treatment was required in 41% of cases at the 6-week follow-up visit.Although the authors did not report the percentage of patients who achieved complete resolution of SRF after this second HSML treatment, 74% of patients had a decrease in central retinal thickness of ≥20 μm after the second treatment.The mean follow-up time of 26 eyes with cCSC in a study by Chen and colleagues was 9.5 months, and the authors performed up to three FA-guided subthreshold micropulse laser treatments using an 810 nm laser; they found that 13 out of 26 patients achieved complete resolution of SRF after these micropulse laser treatments.In the PLACE trial, the only large prospective, multicentre randomised controlled treatment trial studying HSML in cCSC conducted to date, complete resolution of SRF was achieved in only 14% and 29% of cases at 2 and 7–8 months, respectively, in the HSML group.The rates of SRF resolution in the PLACE trial are lower than those reported previously by retrospective studies and smaller prospective studies regarding HSML in cCSC.This difference in outcome may be due to the retrospective nature and relatively small sample sizes of the previous studies, as well as possible differences in inclusion and/or exclusion criteria.Micropulse laser treatment may be more effective in cCSC eyes with focal leakage compared to eyes with diffuse leakage.According to data from a PLACE trial subgroup consisting of 79 HSML-treated patients with cCSC with either focal or diffuse leakage on FA, 41% and 21% of patients with focal or diffuse leakage, respectively, had complete resolution of SRF at 7–8 months.These findings suggest that HSML may be more effective in cCSC with focal leakage on FA.Nevertheless, a significantly higher percentage of patients with cCSC who were treated with half-dose PDT had complete resolution of SRF compared to the HSML-treated group, with rates of 75% versus 41%, respectively, among patients with focal leakage and 57% versus 21%, respectively, among patients with diffuse leakage.It has been suggested that performing HSML treatment directly after intravenous administration of indocyanine green may increase the selectivity for RPE cells.During this procedure, patients receive an intravenous injection of 25 mg indocyanine green dissolved in 2 mL of a 5% glucose solution.After a brief waiting period of 15–20 min, patients undergo HSML treatment with an 810 nm laser.In a small prospective case series, 5 out of 7 patients with cCSC had complete resolution of SRF, and the amount of SRF was reduced in the other two patients within 8 weeks following treatment.In summary, the efficacy of subthreshold micropulse laser treatment for cCSC can be improved by standardising the laser settings and understanding the mechanism of action, and additional prospective randomised clinical trials will help determine its feasibility as a treatment modality for CSC.Although PDT was originally developed as a treatment for skin cancer, subsequent improvements in lasers and powerful light sources eventually paved the way for its introduction in the field of ophthalmology.In PDT applications in ophthalmology, the benzoporphyrin derivative verteporfin is currently approved for use in treating retinal disease, as it has a high affinity for the RPE.The high lysosomal activity in the RPE can lead to the binding of verteporfin to plasma low density lipoproteins, which bind to surface receptors of the cell membrane of vascular and
reticuloendothelial cells.However, compared to photocoagulation, the PDT-induced effects on the RPE are far less destructive.The treatment effect of PDT in CSC is presumably based on the formation of free radicals upon illumination of the treatment site ‒ specifically, the choriocapillaris ‒ which leads to damage to the vascular endothelium and hypoperfusion, and subsequent remodelling of the vessels in the capillary bed underlying the damaged RPE.Because of the treatment's high selectivity, retinal photoreceptors are spared.In ophthalmology, PDT was originally developed for treating CNV secondary to AMD.After it was approved for use in treating AMD, verteporfin was soon used off-label in PDT for treating CSC, particularly cCSC.The studies that were performed to evaluate PDT in 50 or more patients with CSC are summarised in Table 3.Yannuzzi and colleagues were among the first groups to report PDT as a possible treatment strategy for cCSC.In the initial reports, verteporfin was used at the same dose as for neovascular AMD.Later, however, several reduced-intensity PDT regimens such as half-dose, half-fluence, and half-time PDT were developed in order to avoid a possible complication of profound angiographic closure that has been reported ‒ albeit rarely ‒ following PDT for neovascular AMD.Choroidal thickness can transiently increase immediately following PDT treatment for CSC.In one study, mean choroidal thickness increased to 119% of pre-treatment thickness in 8 eyes at 2 days after treatment.This transient effect on choroidal thickness can be accompanied by a transient increase in the height of the serous neuroretinal detachment, and increased visual symptoms have been reported in up to 38% of treated patients measured up to 4 weeks after treatment.Changes in choroidal thickness and SRF height typically decrease within 1 week of treatment and stabilise at 1 month, and are often accompanied by a resolution of SRF, gradually improving visual acuity, and reduced visual symptoms compared to pre-treatment levels.After PDT treatment for unilateral CSC, choroidal thickness in the treated eye can decrease to the same choroidal thickness value as in the unaffected eye, resulting in no significant difference in choroidal thickness between the two eyes.This finding suggests that PDT reduces the choroidal vascular hyperpermeability and thickening that play a key role in the pathogenesis of CSC.The efficacy of using PDT with verteporfin relies upon the proper selection of the target area to be irradiated with a circular spot of light.A common selection strategy is to set the centre and diameter of this spot so that it covers the area of hyperfluorescent abnormalities on mid-phase ICGA and the corresponding point of leakage on FA and OCT, as this is the apparent point of origin for the SRF.In preparation for the procedure, the pupil of the eye to be treated is first dilated using a mydriatic agent.Subsequently, either 6 mg/m2 or 3 mg/m2 verteporfin is delivered via an intravenous infusion over a 10-min time course.Within 10–15 min after the start of the verteporfin infusion, an anaesthetic eye drop is administered to the eye to be treated, and a contact lens ‒ typically, a 1.6x magnification PDT lens ‒ is positioned on the cornea.Light at 689 nm is then applied to the area to be treated at a fluence of 50 J/cm2 for 83 s.
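Because the full-setting, half-dose, half-fluence, and half-time regimens (the latter two are described next) differ only in the verteporfin dose, the fluence rate, or the illumination time, the delivered light dose can be summarised with simple arithmetic.The sketch below is illustrative only: the 600 mW/cm2 fluence rate of the standard regimen is inferred from the quoted 50 J/cm2 over 83 s, and the Mosteller body-surface-area formula is one common approximation that is not necessarily the one used in the cited studies.

```python
from math import sqrt

def light_dose_j_per_cm2(fluence_rate_mw_per_cm2: float, time_s: float) -> float:
    """Delivered light dose (J/cm2) = fluence rate (W/cm2) x illumination time (s)."""
    return fluence_rate_mw_per_cm2 / 1000.0 * time_s

def verteporfin_dose_mg(dose_mg_per_m2: float, height_cm: float, weight_kg: float) -> float:
    """Total verteporfin dose scaled to body surface area (Mosteller approximation, assumed here)."""
    bsa_m2 = sqrt(height_cm * weight_kg / 3600.0)
    return dose_mg_per_m2 * bsa_m2

# Standard-setting illumination: ~600 mW/cm2 for 83 s -> ~50 J/cm2
print(round(light_dose_j_per_cm2(600, 83), 1))    # 49.8
# Half-fluence variant: ~300 mW/cm2 for 83 s -> ~25 J/cm2
print(round(light_dose_j_per_cm2(300, 83), 1))    # 24.9
# Half-time variant: ~600 mW/cm2 for 42 s -> ~25 J/cm2, i.e. also roughly half the light dose
print(round(light_dose_j_per_cm2(600, 42), 1))    # 25.2
# Half-dose verteporfin (3 mg/m2) for a hypothetical 175 cm, 70 kg patient
print(round(verteporfin_dose_mg(3, 175, 70), 1))  # ~5.5 mg
```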
Alternatively, half-fluence PDT (25 J/cm2) can be used, with a full dose of verteporfin and the full treatment duration.A final option is half-time PDT, which uses full-dose verteporfin, the full fluence rate, and a treatment duration of 42 s. Patients should be advised to avoid exposure to direct sunlight and other sources of UV radiation for 48 h after receiving half-dose PDT, and this period of time should be increased or decreased accordingly based on the verteporfin dose.Half-dose PDT has been reported to be as effective as ‒ or superior to ‒ full-dose, half-fluence, and half-time PDT regimens with respect to both aCSC and cCSC.Because of the reduced risk of systemic side effects, half-dose PDT is preferred in some treatment centres.Most studies of PDT for CSC involved patients with cCSC, in which the spontaneous resolution of SRF is less common than in aCSC.Alkin and colleagues found no significant difference in efficacy between half-dose PDT and half-fluence PDT in treating cCSC.In contrast, Nicolo and colleagues reported that the half-dose PDT group achieved complete resolution more rapidly than half-fluence PDT, with 86% and 61% of patients with cCSC, respectively, reaching complete resolution of SRF at the 1-month follow-up point.In other studies, half-fluence PDT was found to be just as effective as full-fluence PDT in treating cCSC, and half-dose PDT was found to be just as effective as full-dose PDT in treating persistent CSC.Moreover, efficacy is similar between half-time PDT and half-dose PDT.In a retrospective case series, Liu and colleagues compared half-dose PDT with full-fluence to half-dose PDT with half-fluence in patients with cCSC and found that complete resolution of SRF was achieved in 93% and 64% of cases, respectively, which was a statistically significant difference.In a study designed to determine the optimal verteporfin dose for PDT in treating aCSC, Zhao and colleagues tested a range of doses from 10% to 70% and found that 30% was the lowest effective dose.Moreover, in their subsequent study, the same group found that half-dose PDT was superior to a 30% dose, with 95% of patients achieving complete SRF resolution, compared to only 75% of patients in the lower dose group.In a different retrospective study involving 16 patients with cCSC, half-dose PDT was found to be superior to one-third dose PDT, with 100% of cases achieving complete resolution of the serous neuroretinal detachment compared to only 33% in the one-third dose group.Thus, PDT treatment using a one-third dose of verteporfin appears to be suboptimal with respect to achieving complete resolution of SRF in CSC.In patients with aCSC, PDT treatment can provide faster SRF resolution, more rapid recovery of retinal sensitivity, and higher BCVA compared to placebo.Indeed, complete resolution of SRF has been reported in 74–100% of patients following PDT treatment.Chan and colleagues performed a randomised controlled trial to compare half-dose ICGA-guided PDT with placebo in patients with aCSC and found a significantly larger improvement in BCVA in the PDT-treated group; moreover, 95% of patients achieved complete resolution of SRF following PDT, which was significantly higher than the placebo group, in which only 58% of patients achieved complete resolution.These results suggest that half-dose PDT may be a suitable treatment option for aCSC, despite the high probability of spontaneous resolution of SRF if left untreated.In a non-randomised retrospective study comparing 11 patients who received half-dose FA-guided PDT and 10 patients who
received placebo treatment, Kim and colleagues found complete resolution of SRF in 80% of aCSC patients at 1 month after PDT, 100% at 3 months after PDT, and 90% at 12 months after PDT, compared to only 18%, 27%, and 64% of patients, respectively, in the placebo group.Both ICGA-guided and FA-guided PDT treatment can be effective in aCSC.Achieving rapid resolution of SRF can be important in order to quickly improve BCVA in some patients, for example patients who depend heavily on optimal BCVA for professional reasons.However, Kim et al. found that long-term BCVA outcome and the prevalence of complete resolution of SRF did not differ significantly between patients who received half-dose PDT and patients who received placebo, with 90% and 64% of patients achieving complete resolution in the PDT and placebo groups, respectively, after 12 months of follow-up.Importantly, treatment with low-fluence PDT may also decrease the risk of recurrence of SRF in patients with aCSC, as a recent study by Ozkaya and colleagues found that 51% of untreated patients had recurrence compared to only 25% of patients who were treated with low-fluence PDT.Finally, our group performed a retrospective study of 295 eyes with aCSC and found that SRF recurred in 24% of untreated eyes compared with only 4% of the eyes that received early treatment consisting primarily of FA-guided half-dose PDT.Some patients have an isolated PED without the presence of SRF.When such an isolated PED occurs in combination with pachychoroid changes, these cases can be considered a variant of pachychoroid pigment epitheliopathy.Given that PED is a frequent and possibly essential element of the pathogenesis of CSC, it is not surprising that non-neovascular PED without serous detachment of the retina has been seen in fellow eyes of patients with CSC and in patients without CSC, some of whom eventually convert to CSC.In cases where an isolated PED is found under the fovea, associated metamorphopsia may give rise to considerable binocular visual complaints.This has prompted attempts to flatten the PED by PDT in long-standing cases of isolated PED with persistent visual symptoms.Arif et al. found that a single session of PDT was followed by complete resolution of the PED in 7 of 9 eyes.Of 13 untreated eyes, 5 eyes underwent spontaneous resolution of the PED.PDT may be useful especially in cases with underlying pachychoroid on OCT and hyperfluorescent choroidal congestion and hyperpermeability on ICGA.In 2003, ICGA-guided full-setting PDT was first applied to patients with cCSC, with Yannuzzi and colleagues reporting complete resolution of SRF in 12 out of 20 eyes within 6 weeks.In the same year, Cardillo Piccolino and colleagues reported complete resolution in 12 out of 16 eyes within 1 month.Although the risk of both short-term and long-term side effects appears to be relatively low in standard full-setting PDT, several studies have experimented with using either a reduced verteporfin dose for treating cCSC, half-fluence PDT, or half-time PDT.In the PLACE trial, Van Dijk et al.
found complete resolution of SRF after ICGA-guided half-dose PDT in 51% and 67% of patients after 6–8 weeks and 7–8 months, respectively.In addition, a non-randomised prospective case series of 18 patients revealed that 85% of patients achieved complete resolution of SRF at 1 month after treatment.The long-term efficacy of half-dose PDT is generally favourable, with SRF resolution rates of 91% and 81% at a mean follow-up of 19 and 50 months, respectively.In a retrospective study in 204 Asian cCSC patients, Fujita and colleagues reported complete resolution of SRF in 89% of patients at 12 months after treatment.Finally, a retrospective study in 52 predominantly Asian cCSC patients found that 93% of patients with cCSC had complete SRF resolution 34 months after reduced-setting PDT, while a separate retrospective study found that 97% of Asian patients with cCSC had no detectable SRF 36 months after half-dose PDT.Another important measure of successful treatment for CSC ‒ in addition to complete resolution of SRF ‒ is retinal sensitivity on microperimetry.Although BCVA is an important parameter in macular diseases such as CSC, BCVA can still be relatively preserved in patients with CSC despite the presence of SRF.In the PLACE trial, the mean retinal sensitivity of patients with cCSC improved by 2 dB and 3 dB at 6–8 weeks and 7–8 months, respectively, after half-dose PDT.Mean retinal sensitivity was also reported to improve within 1 month following half-dose PDT in patients with cCSC, whereas an improvement in BCVA was detected after 3 months.This improved retinal sensitivity may be correlated with reattachment of the cone outer segment tips and the ellipsoid line on OCT.Despite an increase in retinal sensitivity following PDT for unilateral CSC, the final retinal sensitivity remains generally lower than in the unaffected eye.Reduced-setting PDT for cCSC has a favourable long-term BCVA outcome, with an average gain of 5 ETDRS letters measured 7–8 months after treatment, and a mean increase in BCVA from 0.11 to −0.01 logarithm of the minimal angle of resolution units at 12 months.In a 4-year follow-up study, Silva and colleagues reported that patients who received full-setting PDT had a mean increase in BCVA from 59 ETDRS letters at baseline to 67 ETDRS letters at final follow-up visit.Some patients with cCSC may experience a temporary decrease in BCVA shortly after PDT, which may be due to an abrupt reattachment of photoreceptors and/or a temporary increase in SRF, which occasionally occurs together with transient thickening of the choroid.Treating cCSC using PDT can also lead to a decrease in central retinal thickness, which has been described as a desired effect.However, large variations in the methods used to measure central retinal thickness preclude a comprehensive analysis of cumulative data.In a study in which SRF was included in the measure of central retinal thickness, the decrease in thickness was not correlated with BCVA.However, SRF should not be included when measuring central retinal thickness.Thus, to exclude SRF, which was inappropriately included in some of the previous studies regarding PDT in CSC, the distance between the internal limiting membrane and the ellipsoid zone on spectral-domain OCT can be measured and used as a surrogate measure of central retinal thickness.Using this approach, we recently reported that half-dose PDT actually causes a slight increase in central retinal thickness.Patients who do not achieve complete SRF resolution after reduced-setting PDT may 
experience a smaller reduction in central retinal thickness compared to patients who achieved complete resolution.Recurrent SRF after initial complete SRF resolution following ICGA-guided half-dose PDT for cCSC occurred in 13% of patients measured at a mean follow-up of 19 months, and in 18% of patients measured at a mean follow-up of 50 months.In a retrospective study of 75 eyes with CSC treated with half-dose PDT or placebo and followed for at least 3 years, only 20% of eyes in the half-dose PDT group had recurrent CSC compared to 53% of untreated eyes.Interestingly, the rate of recurrence after half-dose PDT is higher among patients with bilateral cCSC compared to patients with unilateral cCSC.Moreover, a 4-year follow-up study of cCSC patients by Silva and colleagues found that 3 out of 46 eyes had persistent SRF 4 years after full-dose PDT.Several putative predictors of treatment outcome following PDT for CSC have been proposed.For example, PDT can be ineffective and/or have a high rate of recurrence in patients with cCSC who have: 1) PCRD, 2) an absence of an intense hyperfluorescent area on ICGA, 3) poor baseline BCVA, 4) a disruption in the ellipsoid zone, 5) a diffuse hyperfluorescent pattern on ICGA, and/or 6) the presence of shallow irregular RPE detachments on OCT.On the other hand, patients with cCSC generally respond better to half-dose PDT compared to HSML treatment regardless of the presence of either focal or diffuse leakage on FA.This may indicate that the same pathophysiological processes are involved in both cCSC with focal leakage and cCSC with diffuse leakage.When subretinal deposits are visible on FAF, foveal damage may already exist and may not recover following PDT.When atypical features such as massive exudation with large serous retinal detachment and multiple white subretinal deposits are present, PDT can also be effective.An absence of hyperfluorescent abnormalities on ICGA in cCSC can be predictive of a non-resolving serous neuroretinal detachment following PDT.Finally, Breukink and colleagues found no difference between cCSC patients who use corticosteroids and cCSC patients who do not use corticosteroids with respect to outcome following PDT, with complete resolution of SRF in 69% and 50% of patients, respectively.To date, only a few side effects have been reported in association with PDT using the standard treatment settings that were previously described for treating AMD.These side effects can include nausea, headache, dyspnoea, syncope, dizziness, a decrease in BCVA, and possible side effects at the site of verteporfin infusion, including pain, oedema, inflammation, and extravasation.Rare side effects that have been reported include hypersensitivity reactions to the infusion, temporary renal artery stenosis, and non-perfusion of the choroidal vasculature at the treated area.Therefore, patients should be monitored closely during the PDT procedure.Contraindications for PDT include pregnancy, porphyria, and poor liver function.Neither systemic nor ocular side effects were observed in a study involving 46 eyes with cCSC in 42 patients who were followed for 4 years after full-dose PDT treatment.In contrast, adverse events were reported in non-human primates after full-dose PDT and included RPE proliferation, closure of the choroidal vasculature, foveal thinning, and retinal oedema.The severity and risk of adverse effects following PDT can increase when the fluence is doubled from the standard 50 J/cm2 to 100 J/cm2, corresponding to 4 times the fluence used
in half-fluence PDT that is often used for the treatment of CSC.In a meta-analysis of studies comparing full-dose PDT and placebo-treated patients with AMD and CNV, Azab and colleagues found a higher rate of visual disturbances in the PDT-treated group compared to the placebo group, including abnormal vision, decreased vision, and visual field defects.Moreover, they found that 1–5% of patients treated with full-dose PDT had an acute decrease in visual acuity; interestingly, BCVA still improved by at least 1 line in 71% of patients who experienced this acute decrease in visual acuity.Few severe side effects have been reported in association with PDT for CSC.For example, a case report of one patient with cCSC and two patients with serous PED who developed severe choroidal ischaemia after receiving full-setting PDT has been published.Moreover, a transient loss of visual acuity was reported in a patient with cCSC following half-fluence PDT; visual acuity recovered within 22 months.When using full-dose PDT in patients with CSC, the presence of fibrin underneath the neurosensory detachment may increase the treatment reaction by conjugating verteporfin with fibrin.Therefore, caution is advised in these cases with subretinal fibrin, although there currently is no clear evidence with respect to using PDT in such cases.To minimise the risk of PDT-related side effects, reduced-setting PDT was developed for CSC.Overall, reduced-setting PDT is well-tolerated, and no treatment-related severe adverse events such as CNV or RPE atrophy have been reported by the many studies conducted to date.The relatively low risk of systemic photosensitivity can be reduced further using half-dose PDT instead of half-fluence PDT.Thus, treating ophthalmologists may wish to consider whether this side effect is a high risk for their patients and ‒ if so ‒ may opt for half-dose PDT rather than half-fluence or half-time PDT.In a study involving 39 aCSC patients who were treated with half-dose PDT, no ocular or systemic side effects were observed during 12 months of follow-up.Similarly, in the PLACE trial, no ocular or systemic side effects were observed in 89 cCSC patients treated with half-dose PDT during a follow-up period of 7–8 months.Recently, Fujita and colleagues reported no systemic or ocular side effects in 204 eyes with cCSC treated with half-dose PDT, with the sole exception of a polypoidal lesion 8 months after treatment in one eye; however, given that CSC is part of the pachychoroid disease spectrum, this side effect cannot be attributed definitively to PDT, but may represent the natural course of the disease.Despite the overall favourable safety profile of PDT in treating CSC, a retrospective study involving either full-dose or reduced-setting PDT revealed RPE atrophy in 10 out of 250 eyes and an acute severe visual decrease in 4 out of 265 eyes.In a study involving 199 patients with severe cCSC with pre-existing fovea-involving RPE atrophy, Mohabati and colleagues found a decrease of > 2 ETDRS lines in 9 patients after PDT; in three of these patients, the decrease in BCVA was permanent and involved a loss of 11–13 ETDRS letters.Although the vision loss in this very specific category of severe cCSC with fovea-involving RPE atrophy may be due to the PDT treatment, it is also possible that the progressive RPE atrophy is part of the natural course of this more severe form of cCSC.However, this relatively small minority of patients with cCSC with extensive foveal RPE atrophy should be counselled regarding the risk of 
further vision loss following PDT, and further studies are needed in order to investigate these findings in further detail.Some patients with cCSC may require re-treatment with reduced-setting PDT due to recurrence of SRF or persistent SRF.However, in the PLACE trial, a second treatment with half-dose PDT was able to achieve complete resolution of SRF in only 32% of cases.The risk of not responding to PDT treatment may be high in patients who present with hypofluorescence on ICGA at the area corresponding with the focal leakage point on FA.Repeat PDT treatment may still be effective, particularly in patients who have a serous retinal detachment with SRF when the leakage results from persistent ‒ or recurrent ‒ hyperfluorescent choroidal changes on ICGA in association with focal leakage on FA.For example, this may be the case when these areas were not included in the initial PDT treatment spot.Whether repeat treatment can induce cumulative changes in the choroid that can eventually lead to adverse effects such as RPE atrophy is currently unknown; therefore, some groups limit the maximum number of PDT treatments for CSC to 2 or 3 treatments per eye.Experimental evidence suggests that inhibiting VEGF has an anti-proliferative and anti-hyperpermeability effect on choroidal endothelial cells.In addition, several clinical studies involving patients with AMD and diabetic macular oedema have shown that inhibiting VEGF has a robust inhibitory effect on leakage and fibrovascular proliferation, decreases choroidal blood flow, and reduces central choroidal thickness.Because CSC is believed to originate from the choroidal vasculature, intravitreal injections of anti-VEGF compounds such as bevacizumab, ranibizumab, and aflibercept have been suggested as a possible treatment for CSC by modifying choroidal vascular permeability.However, the use of anti-VEGF injections for treating CSC is generally off-label, so informed consent should be obtained from the patient prior to treatment, emphasizing this off-label use.Although some studies have investigated the use of anti-VEGF for CSC, no large, prospective randomised controlled clinical trials have been performed.Some studies found a positive effect.For example, Artunay and colleagues found that 80% of 15 patients treated with bevacizumab had complete resolution of SRF, compared to 53% of 15 untreated control group patients.In a prospective study of 20 patients with aCSC who received ranibizumab and 20 patients who received no treatment, SRF resolved in 4 weeks compared to 13 weeks, respectively.In a randomised, non-controlled pilot study involving 8 cCSC eyes treated with 3 intravitreal injections of ranibizumab, 2 eyes had complete resolution of SRF at 3 months.However, in a subsequent prospective study with ranibizumab, Bae and colleagues reported complete resolution in only 13% of cCSC eyes treated with ranibizumab after 12 months, compared to 89% of eyes treated using low-fluence PDT.Despite these positive reports, however, a meta-analysis failed to confirm the putative positive effects of bevacizumab, ranibizumab, or aflibercept for aCSC, although the authors did suggest that certain subtypes of cCSC might benefit from anti-VEGF treatment.This may be particularly true for patients with CSC with associated CNV.A prospective pilot study involving 12 cCSC patients revealed that intravitreal aflibercept led to complete resolution of SRF in 6 patients, but had no significant effect on BCVA.Moreover, changes in choroidal thickness have been observed after 
intravitreal injections of anti-VEGF.Specifically, Kim and colleagues reported that choroidal thickness was decreased by an average of 22 μm in 42 cCSC eyes measured at a mean follow-up of 9 months after the start of intravitreal injections of bevacizumab.This decrease is similar to the results reported using aflibercept and bevacizumab for AMD, which both resulted in a decrease in choroidal thickness of approximately 36 μm.Because these studies were performed before the availability of OCT angiography, it is unclear whether the SRF that resolved occurred due to CSC or due to secondary CNV, which is difficult to distinguish based on FA and ICGA images.Given the lack of large prospective trials and the unknown explanation for its efficacy in CSC, intravitreal injections of anti-VEGF agents should probably be limited to patients with CSC together with CNV and/or polypoidal choroidal vasculopathy, as discussed in section 3.3.Elevated levels of cortisol and endogenous mineralocorticoid dysfunction have been described in CSC patients.Moreover, there appears to be an association between corticosteroid use and CSC, and rats that received corticosteroids had increased expression of MRs. These findings led to the hypothesis that MR antagonists may be used to treat CSC.Pilot studies using the MR antagonists eplerenone and spironolactone in patients with CSC have yielded promising results.However, the patient's renal function and potassium levels should be monitored closely before treatment and at regular intervals during treatment, as MR antagonists can induce hyperkalaemia, which has been associated with cardiac arrhythmia.Patients whose serum potassium level exceeds 5.5 mEq/L and/or whose creatinine clearance rate is ≤30 mL/min should not receive treatment with MR antagonists.On the other hand, patients with a relatively thick choroid may respond better to treatment with MR antagonists.Glucocorticoids likely play a role in the pathogenesis of CSC, and glucocorticoid receptors are expressed in both the retina and choroid.In rats, corticosterone can cause choroidal thickening, a feature common among patients with CSC.This finding has prompted experimental treatment of CSC using the glucocorticoid receptor antagonist mifepristone.Spironolactone is a potassium-sparing diuretic that binds to the distal tubule in the kidney as a binding competitor of aldosterone.Spironolactone slows the exchange of sodium and potassium in the distal tubule and has been approved for treating congestive heart failure and primary hyperaldosteronism.The most common side effects of spironolactone are headache, diarrhoea, fatigue, gynaecomastia, decreased libido, and menstrual disruption.Patients treated with spironolactone must be monitored closely for hyperkalaemia, which can induce cardiac arrest.Patients with diabetes mellitus, liver disorders, or kidney disorders, as well as elderly patients, are particularly at risk.Contraindications for spironolactone use include the concomitant use of potassium supplements, the use of potassium-sparing diuretics, the use of potent CYP3A4 inhibitors, or the combined use of an angiotensin-converting enzyme inhibitor with an angiotensin receptor blocker, as taking these drugs together with spironolactone can increase the risk of hyperkalaemia and subsequent cardiac arrhythmia.Several studies have shown beneficial effects of spironolactone in CSC, including improved BCVA, reduced choroidal thickness, and reduced SRF.In a randomised controlled crossover study involving 15 patients with
non-resolving CSC, spironolactone treatment was associated with an average reduction in choroidal thickness of 102 μm, compared to only 10 μm with placebo; the number of patients who achieved complete resolution of the serous neuroretinal detachment was not reported.In a prospective case series, 16 eyes with cCSC were treated with 25 mg spironolactone per day for at least 6 weeks, resulting in complete resolution of SRF in 7 eyes and a significant increase in BCVA compared to baseline.In a prospective clinical trial involving 21 eyes with cCSC treated with 25 mg spironolactone twice daily, 15 eyes had decreased SRF on OCT 12 months after the start of treatment.In another prospective randomised controlled clinical trial involving 30 eyes with aCSC, a significantly higher percentage of eyes had complete SRF resolution at two months in the spironolactone-treated group compared to the observed control group.Recently, Kim and colleagues retrospectively analysed the outcome after using spironolactone to treat 17 eyes with steroid-induced CSC; the authors found complete SRF resolution in 14 eyes of patients who remained on glucocorticoids.Despite these promising initial results, prospective randomised controlled trials of sufficient power and duration are needed in order to fully evaluate the clinical benefits of using spironolactone for CSC.Similar to spironolactone, eplerenone is primarily used to treat heart failure.Eplerenone was originally designed to avoid the hormone-associated side effects of spironolactone, serving as a more selective MR antagonist due to the addition of a 9,11-epoxide group.Although eplerenone likely has more tolerable side effects compared to spironolactone, eplerenone does not appear to be clinically superior to ‒ and possibly not equivalent to ‒ spironolactone in treating CSC.However, after the patent on eplerenone expired, the price difference between eplerenone and spironolactone became negligible, and patients ‒ particularly male patients ‒ should first try eplerenone, as it is far less likely to induce gynaecomastia and mastalgia, aside from other possible side effects.Before starting eplerenone treatment, the patient's serum potassium and creatinine levels should be checked.Different approaches for the monitoring of serum potassium exist, and the following protocol is an example.Treatment with eplerenone should not be initiated if serum potassium is > 5.5 mEq/L or if the creatinine clearance is ≤ 30 mL/min.Patients usually commence with 1 dose of 25 mg eplerenone a day.Serum potassium should be reassessed after approximately 1 week.If potassium is < 5.0 mEq/L, the eplerenone dose is increased.If serum potassium is between 5.0 and 5.4 mEq/L, eplerenone treatment should remain at the current dose.If serum potassium is between 5.5 and 5.9 mEq/L, the eplerenone dose is reduced.When serum potassium is ≥ 6.0 mEq/L, eplerenone treatment should be stopped, but can be restarted when serum potassium levels fall below 5.5 mEq/L.Serum potassium levels should be checked monthly, and the dosage should be adjusted accordingly.Patients taking eplerenone should be instructed to contact their physician if they experience any side effects such as nausea, diarrhoea, dizziness, or headache, which can occur in up to 10% of patients.Similar to spironolactone, contraindications for eplerenone include the use of potassium supplements, potassium-sparing diuretics, potent CYP3A4 inhibitors, or combined treatment with an angiotensin-converting enzyme inhibitor and angiotensin receptor blocker.
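The potassium-guided dose adjustments described above form a simple decision rule, which the sketch below restates purely for illustration.The thresholds follow the example protocol given in this paragraph, but the function, its argument names, and its return strings are our own construction and are no substitute for the treating physician's judgement.

```python
def eplerenone_dose_action(serum_k_meq_per_l: float, creatinine_clearance_ml_min: float,
                           on_treatment: bool) -> str:
    """Illustrative restatement of the example potassium-monitoring protocol for eplerenone."""
    if not on_treatment:
        # Do not start eplerenone if potassium or renal function is unfavourable
        if serum_k_meq_per_l > 5.5 or creatinine_clearance_ml_min <= 30:
            return "do not initiate eplerenone"
        return "start 25 mg/day; recheck serum potassium after ~1 week"
    # Dose adjustment at each potassium check during treatment
    if serum_k_meq_per_l < 5.0:
        return "increase dose"
    if serum_k_meq_per_l <= 5.4:
        return "continue current dose"
    if serum_k_meq_per_l <= 5.9:
        return "reduce dose"
    return "stop eplerenone; may restart once potassium falls below 5.5 mEq/L"

# Example checks
print(eplerenone_dose_action(4.6, 80, on_treatment=False))  # start 25 mg/day ...
print(eplerenone_dose_action(5.2, 80, on_treatment=True))   # continue current dose
print(eplerenone_dose_action(6.1, 80, on_treatment=True))   # stop eplerenone ...
```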
In a prospective pilot study, Bousquet and colleagues prescribed eplerenone to 13 patients with cCSC and reported a reduction in SRF, reduced central macular thickness, and improved BCVA.Cakir and colleagues retrospectively reported that 29% of patients with cCSC who failed to respond to oral acetazolamide, intravitreal bevacizumab, focal laser photocoagulation, or PDT achieved complete resolution of SRF after a median of 106 days of daily eplerenone.Other studies have also shown that eplerenone can have clinical value, as summarised in Table 4.Importantly, an absence of CNV on OCT angiography and the presence of a focal leakage point on ICGA may serve as predictive factors for complete resolution of SRF following eplerenone treatment.On the other hand, patients who present with widespread changes in the RPE may benefit less from eplerenone treatment compared to patients who present without these abnormalities.The duration of eplerenone treatment in published studies ranged from 1 month to 51 months, and although the effects of eplerenone in CSC should be evident within a few months, information on when a treatment effect can be expected is scarce.To date, no large, prospective randomised controlled trials have been conducted in order to measure the efficacy or long-term outcome of using eplerenone or spironolactone to treat CSC.However, two prospective studies are currently in progress; one study is designed to compare eplerenone with sham treatment, and the other is designed to compare eplerenone with half-dose PDT.Mifepristone is a glucocorticoid antagonist that binds to the cytosolic glucocorticoid receptor and prevents gene transcription by blocking recruitment of coactivators, thus rendering the receptor complex inactive.In addition, mifepristone competes with progesterone for binding to the progesterone receptor.Mifepristone is currently approved for pharmaceutically induced abortion.Given that steroids are the most important external risk factor for developing CSC, stimulation of the glucocorticoid receptor may play a role in the pathogenesis of CSC, thereby providing the rationale for using mifepristone to treat CSC.In a prospective study of 16 patients with cCSC who received 200 mg/day mifepristone for up to 12 weeks, 5 patients had an improvement in BCVA of ≥5 ETDRS letters, with no severe adverse events reported.Currently, a randomised placebo-controlled clinical trial designed to test the effects of mifepristone in 16 patients with CSC is underway, and the results of this study are expected to be released in the near future.A variety of other oral pharmaceutical-based treatments have been reported, primarily from relatively small, retrospective studies.These studies should be interpreted with caution, as spontaneous recovery is common in aCSC, and spontaneous improvement and resolution can also occur in cCSC.Treatment with high-dose antioxidants was studied in patients with aCSC in a randomised placebo-controlled trial.In a group of 29 patients who received high-dose antioxidants, 22 achieved complete resolution of SRF, compared to 14 out of 29 patients who received placebo.It is important to note that during this trial, patients were able to receive additional treatments as needed, which complicates the analysis of the putative effects of antioxidants.Oral administration of a curcumin-phospholipid formulation, which purportedly has antioxidant and anti-inflammatory properties, was found to reduce the height of the neurosensory retinal detachment in 78% of 12 patients with CSC, although no
information was provided with respect to whether the patients had aCSC or cCSC.According to the currently available evidence, there is no clear indication for treating CSC using antioxidants.Topical application of the nonsteroidal anti-inflammatory drug nepafenac has also been suggested for treating aCSC.Alkin and colleagues retrospectively studied this hypothesis and found a significantly larger improvement in BCVA after 6 months in 31 eyes treated with topical nepafenac 3 times daily for 4 weeks or until complete SRF resolution, compared to an untreated control group; moreover, at the 6-month follow-up visit 14 out of 17 eyes in the treatment group had complete resolution of SRF, compared to 6 out of 14 eyes in the control group, and no treatment-related or systemic side effects were reported.In a case report, Chong and colleagues reported that a patient with aCSC who received topical ketorolac had SRF resolution after 18 weeks.These relatively small studies should be supported by more robust evidence before NSAIDs can be introduced into clinical practice for treating CSC.Rifampicin is used primarily for its antimicrobial properties, but it can also affect the metabolism of endogenous steroids by upregulating cytochrome P450 3A4.The 5′-untranslated region in the CYP3A4 gene includes glucocorticoid regulatory elements that may be altered in CSC.A prospective single-arm study of rifampicin showed that SRF resolved in 4 out of 14 eyes at 6 months; treatment was discontinued in two patients due to cholelithiasis and increased blood pressure.Moreover, a single case report described the resolution of SRF in a patient with cCSC 1 month after the start of rifampicin treatment.In addition, Venkatesh and colleagues performed a retrospective analysis of patients with cCSC who were treated with rifampicin and found that 4 eyes with focal leakage on FA had complete resolution of SRF after an average follow-up of 10 months; in contrast, the eyes with diffuse leakage on FA had persistent SRF.Finally, an observational clinical study of 38 eyes in 31 patients with idiopathic CSC revealed that rifampicin improved mean BCVA from 0.56 to 0.47 LogMAR units measured 4 weeks after cessation of treatment.Despite these promising results, further studies have low priority given the side effects associated with rifampicin and the relatively slow treatment response in CSC, if any.Aspirin inhibits platelet aggregation and may reduce serum levels of plasminogen activator inhibitor 1, which can be increased in CSC.A prospective case series described a positive effect of aspirin in 109 patients with unspecified CSC; specifically, BCVA improved to a larger extent in patients who were treated with aspirin compared to historical control patients.However, because the control group was based on retrospective data, the conclusions should be interpreted with caution.Thus, there is currently extremely limited evidence supporting the notion that aspirin is a viable treatment for CSC.In a case report, Tatham and Macfarlane found that treating two patients with recurrent CSC with the selective β1 receptor blocker metoprolol resulted in resolution of SRF in both patients.In a small, randomised controlled trial, the effect of the non-specific beta-blocker nadolol was evaluated in 8 patients with unspecified CSC; the authors found that the size of the subretinal detachment was reduced to a lesser extent in patients who were treated with nadolol compared to patients who received placebo, although the difference was not statistically
significant.This finding suggests that nadolol may actually reduce the likelihood of achieving SRF resolution and is therefore unlikely to be useful in treating CSC.Consistent with this notion, a recent study found that a different non-specific beta-blocker ‒ metipranolol ‒ had no significant effect in patients with aCSC with respect to resolving SRF compared to control-treated patients.Taken together, the currently available evidence suggests that beta-blockers are not likely to be a viable treatment for CSC.Oral administration of the carbonic anhydrase inhibitor dorzolamide was found to have clinical benefits in treating cystoid macular oedema in patients with retinitis pigmentosa.Subsequently, Wolfensberger and colleagues hypothesised that acidification of the subretinal space increases fluid resorption through the RPE, possibly due to a perturbation in carbonic anhydrase type IV, leading to the proposal that carbonic anhydrase inhibitors may be a viable option for treating CSC.However, although a retrospective study of 15 patients with unspecified CSC indicated that oral treatment with the carbonic anhydrase inhibitor acetazolamide can reduce the time until complete resolution of SRF compared to an untreated control group, BCVA and the rate of recurrence did not differ between acetazolamide-treated and untreated patients.Moreover, large, well-designed studies to assess the clinical benefits of inhibiting carbonic anhydrases in patients with CSC have not been performed.Thus, carbonic anhydrase inhibitors are not likely to be a viable treatment for CSC.Finasteride is an inhibitor of dihydrotestosterone synthesis and is used to treat benign prostatic hyperplasia and hair loss.Because androgens such as testosterone may play a role in CSC, finasteride has been evaluated as a possible treatment for CSC.However, a pilot study involving 5 patients with cCSC found that taking 5 mg/day finasteride for 3 months had no effect on BCVA measured at 6 months compared to baseline; the rate of SRF resolution was not reported.In contrast, a retrospective review of 23 patients with cCSC found that 76% of patients who were treated with finasteride had complete SRF resolution after a mean follow-up duration of 15 months.With respect to side effects, two patients in the pilot study by Forooghian and colleagues reported a loss of libido, whereas no side effects were observed by Moisseiev and colleagues.These relatively preliminary studies should be followed by larger studies in order to evaluate whether finasteride treatment can benefit patients with CSC.Infection with the bacterium H. pylori has been proposed as a risk factor for CSC, although this putative association has not been demonstrated conclusively.H. pylori infection can be eradicated using metronidazole or omeprazole together with amoxicillin and/or clarithromycin.Interestingly, successful eradication of H. pylori in patients with unspecified CSC has been reported to lead to more rapid resolution of SRF in a retrospective, comparative study of 25 patients compared to 25 untreated patients who did not have an H. pylori infection.With respect to aCSC, eradicating H. pylori was found to improve retinal sensitivity but had no effect on BCVA or complaints of metamorphopsia.Nonetheless, a prospective, randomised, case-controlled, non-blinded study involving 33 patients with aCSC and H. pylori found that treating the H.
pylori infection improved BCVA and retinal sensitivity measured using automated static perimetry.As noted above, there is currently no compelling evidence supporting the notion that H. pylori infection is a major risk factor for CSC, and the evidence to date to support the idea that eradicating H. pylori may serve as a possible treatment for CSC is limited.Nevertheless, patients with CSC should be tested for H. pylori if they present with symptoms associated with this bacterial infection such as stomach ache or heartburn.Ketoconazole is primarily used as an anti-fungal agent, but it also has glucocorticoid receptor antagonising properties.These glucocorticoid-related effects may be of clinical value in treating CSC, as CSC may be associated with an upregulation of glucocorticoid receptors.Two studies examined the effects of oral ketoconazole in 15 patients with aCSC and 5 patients with cCSC.The authors found that ketoconazole decreased endogenous urine cortisol levels, but had no significant effect on visual acuity or serous neuroretinal detachment; moreover, erectile dysfunction and nausea were reported in one patient each.These results indicate that further study is warranted before ketoconazole can be considered as a possible first-line treatment for CSC.The effects of melatonin on the circadian rhythm have been suggested to also have positive effects in CSC.To test this hypothesis, Gramajo and colleagues performed a prospective, comparative case study in which 13 cCSC patients were treated with melatonin.The authors found that the patients who received melatonin had a larger improvement in BCVA compared to a control group.Moreover, 3 out of 13 treated patients had complete resolution of SRF at the 1-month follow-up visit.No side effects were reported.No additional evidence is available regarding the use of melatonin in treating CSC; therefore, further study is warranted.Methotrexate is an antimetabolic, immunosuppressive drug used primarily in treating inflammatory disorders such as rheumatoid arthritis.Because of its non-immunosuppressive properties ‒ for example, its interaction with steroid receptors ‒ methotrexate may be beneficial for treating cCSC.Two studies tested this hypothesis, and both found that treating patients with cCSC for 12 weeks with oral low-dose methotrexate resulted in significant improvements in BCVA.Abrishami and colleagues prospectively studied 23 patients and found that 13 patients had complete resolution of SRF at their 6-month follow-up visit.In a retrospective study by Kurup and colleagues, 9 patients with cCSC were treated with low-dose methotrexate for an average of 89 days, with 83% of patients achieving complete resolution of SRF after an average treatment duration of 12 weeks.Although these results suggest that additional well-designed randomised controlled trials are warranted, methotrexate is a generally unattractive treatment option in CSC, as it can have severe side effects, including bone marrow suppression and pulmonary, hepatic, and renal toxicity.Several small studies and case reports have described non-conventional treatments for CSC, including wearing an eye patch, intravitreal injections of dobesilate, and acupuncture.Interestingly, an ophthalmologist with CSC reported that he was able to photocoagulate his own leak by ‘sungazing’.Central serous chorioretinopathy is commonly divided into two categories based on the duration of symptoms, the extent of leakage on angiography, and the presence of RPE atrophy; these two categories are aCSC and 
cCSC.Chronic CSC can be complicated by CNV and/or PCRD, which may be viewed as specific complicated subcategories of cCSC.According to the literature, most investigators support this incomplete and relatively rudimentary classification of CSC.However, there currently is no clear consensus regarding the criteria for classification, and a better defined classification system is needed.Our current lack of an established classification system complicates the study of the natural disease progression of CSC, its therapeutic management, and the design of interventional trials, which must take into account the relatively early onset of the maculopathy, the common spontaneous resolution of SRF, and the disease's relatively benign course.More narrowly defining clinical CSC subgroups may influence treatment outcome and may help guide the development of treatments tailored to each clinical subtype of CSC.In this regard, safety is of the utmost importance when developing new treatment strategies for CSC, given that CSC usually presents early in life and has a relatively benign disease course.A summarising flowchart with a proposal for decision making in treatment of aCSC is shown in Fig. 7.Because of the high rate of spontaneous SRF resolution within three to four months in aCSC, observation during the first four months is the most widely used strategy, except in patients who require rapid SRF resolution and visual rehabilitation, for example for professional reasons, or in cases with outer segment atrophy and/or granular debris in the subretinal space.Although aCSC often resolves spontaneously, retinal damage can still occur in the early phases and may progress as long as the serous neuroretinal detachment persists due to SRF accumulation.An essential insight gained with OCT is that the SRF may not be resolved, yet the residual subfoveal fluid can be so shallow that it evades detection by slit-lamp biomicroscopy.This residual detachment can still lead to atrophy of photoreceptor outer segments and vision loss over a period of years.Thus, the prevailing clinical recommendation of waiting four months after presentation before considering intervention is not strongly supported by objective evidence.This recommendation also fails to take into consideration the fact that OCT can be used to diagnose atrophic photoreceptor outer segments due to months or years of chronic foveal SRF, even in the absence of RPE abnormalities.The goals of an intervention in aCSC should be to reduce the time needed to restore vision and to stabilise the visual prognosis.In practice, this means that SRF should be resolved and recurrent serous neuroretinal detachment should be prevented.Although some treatments such as PDT, HSML, and eplerenone can decrease the time needed to achieve complete resolution of SRF, solid data from large prospective trials are currently lacking, particularly with respect to HSML and eplerenone.Photocoagulation of a focal leak on angiography can sometimes lead to the rapid and complete resolution of SRF; however, these ‘ideal’ cases ‒ which have a solitary source of leakage at a relatively safe distance from the fovea ‒ are uncommon, and limited evidence is available with respect to long-term efficacy and safety.Photocoagulation does not clearly address the underlying choroidal leakage, and it carries the risk of inducing CNV, symptomatic paracentral scotoma, and/or a chorioretinal adhesion with secondary intraretinal cystoid oedema.
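As a simple illustration, the observation-first approach to aCSC outlined above can be restated as a minimal decision rule.The sketch below is deliberately simplified and is not a validated clinical algorithm: the function, its argument names, and the wording of its outputs are ours, and they merely mirror the considerations mentioned in this paragraph (spontaneous resolution within roughly three to four months, the need for rapid visual rehabilitation, and OCT signs of outer segment atrophy or subretinal debris).

```python
def acsc_initial_strategy(months_since_onset: float,
                          needs_rapid_visual_rehabilitation: bool,
                          outer_segment_atrophy_or_debris: bool) -> str:
    """Simplified illustration of the observation-first approach to aCSC described in the text."""
    if needs_rapid_visual_rehabilitation or outer_segment_atrophy_or_debris:
        # e.g. professional visual demands, or OCT signs of photoreceptor damage
        return "consider early treatment (e.g. half-dose PDT) rather than observation"
    if months_since_onset < 4:
        return "observe; spontaneous SRF resolution is common within 3-4 months"
    return "persistent SRF beyond ~4 months: re-evaluate and consider treatment"

print(acsc_initial_strategy(2, False, False))  # observe ...
print(acsc_initial_strategy(2, True, False))   # consider early treatment ...
print(acsc_initial_strategy(5, False, False))  # persistent SRF beyond ~4 months ...
```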
resolution and improved visual outcome in the sole reasonably sized prospective, double-masked, placebo-controlled, randomised clinical trial in aCSC conducted to date. In addition, retrospective evidence suggests that the risk of recurrence of SRF leakage in aCSC is reduced following PDT. Based on current evidence, relatively early treatment with half-dose PDT may be considered the treatment of choice in patients with active aCSC who have had previous episodes of SRF, patients with bilateral disease activity, and/or patients who rely on their vision for professional reasons. In the event of persistent SRF following half-dose PDT, the clinician may consider re-treatment or another treatment strategy such as an MR antagonist or HSML. ICGA-guided PDT may be the treatment of choice for aCSC, as it may also target the primary choroidal abnormalities; however, large, prospective randomised studies are needed in order to establish a clear basis for an evidence-based approach to treating aCSC. A summarising flowchart with a proposal for decision making in the treatment of cCSC is shown in Fig. 8. The persistence of SRF in cCSC is associated with partly irreversible, progressive photoreceptor damage, leading to loss of visual acuity and an accompanying loss of vision-related quality of life. Therefore, the aim of treatment should be to stop this progression and to improve vision. The most commonly used treatments for cCSC are PDT, eplerenone, HSML, and argon laser photocoagulation. HSML can induce complete resolution of SRF in 14–71% of CSC patients, with a more favourable outcome in patients with a focal leakage spot on FA compared to patients with diffuse leakage. The PLACE trial, an investigator-initiated study, is the only large, prospective multicentre randomised controlled trial comparing ICGA-guided 810 nm HSML with ICGA-guided half-dose PDT in patients with cCSC. In this trial, half-dose PDT was superior to HSML in terms of both short-term and long-term complete resolution of SRF. Moreover, at 6–8 weeks, both the increase in BCVA and the increase in retinal sensitivity on microperimetry were significantly higher in the half-dose PDT group compared to the HSML group. No comparable data are available for HSML using a 577 nm laser. In addition to the significant differences in treatment efficacy measured in the PLACE trial, the value of using HSML to treat cCSC is further complicated by the wide range of treatment regimens, laser settings, and wavelengths that have been reported thus far. Treatment with MR antagonists has been associated with complete resolution of the neuroretinal detachment in 20–66% of patients. Although spironolactone and eplerenone appear to be similarly effective at their respective preferred doses, eplerenone is preferred due to its favourable safety profile. Recurrences of SRF are more likely to occur after spironolactone than after half-dose PDT. The clinical evidence for MR antagonists is considerably less convincing than that for PDT and HSML, stemming primarily from retrospective studies that reported lower rates of SRF resolution than those achieved with PDT. The results of the PLACE trial are supported by a large body of retrospective evidence indicating that 62–100% of patients with cCSC can achieve complete SRF resolution following PDT, with reported patient numbers that are vastly higher than those studied for either HSML or eplerenone treatment. Although no large study has compared MR antagonists with PDT, such a study is currently underway. Importantly, the risk of
treatment-related side effects is relatively low, and neither eplerenone nor PDT treatment appears to induce permanent damage to the choriocapillaris. Laser photocoagulation may be considered for patients with cCSC with focal leakage located outside of the macular area, for example when PDT is either unavailable at the treatment centre or cost-prohibitive. However, the long-term outcome following photocoagulation does not appear to be superior to no treatment, although the evidence to date is relatively scarce. On the other hand, other treatments such as half-dose PDT do not have these limitations and have good long-term safety profiles, with few reported side effects. Based on currently available data, half-dose or half-fluence PDT appears to be the most effective and safest treatment for cCSC without additional complications. However, it should be noted that half-dose PDT treatment with verteporfin is considerably more expensive than some other treatments, and requires the use of a specific laser machine. When half-dose PDT is unavailable and/or cost-prohibitive, other treatments can be considered, including focal argon laser at eccentric focal leakage points on FA, MR antagonists, and HSML; the choice of treatment should be based on a case-by-case discussion, as robust evidence with respect to these non-PDT treatment modalities is currently lacking. Half-dose or half-fluence PDT may also be considered in symptomatic cCSC patients with SRF outside the fovea. In the case of persistent SRF following half-dose PDT, re-treatment with PDT or another treatment such as an MR antagonist or HSML may be considered. Moreover, further research ‒ preferably prospective studies ‒ is needed in order to determine whether the relatively small subgroup of patients who have extensive atrophic RPE changes that include the fovea should be excluded from PDT due to the risk of irreversible mild to moderate vision loss, which was reported to occur in up to 2% of patients in this highly specific subtype of severe cCSC. Alternative treatments such as MR antagonists and HSML may be considered in this subgroup of patients with severe cCSC that includes atrophic RPE changes affecting the fovea, even though these treatments appear to be less effective than PDT in accomplishing resolution of SRF accumulation in these cases.
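Purely as an illustration, the comparative evidence above and the decision proposal in Fig. 8 can be restated as simple selection logic. The sketch below is not part of the original flowchart and is not clinical guidance; the inputs, function name, and fallback options are simplifying assumptions introduced here for readability only.

```python
# Illustrative sketch only: a loose, simplified rendering of the cCSC treatment
# considerations discussed above (cf. Fig. 8). Not clinical guidance; inputs and
# return strings are hypothetical simplifications.

def suggest_uncomplicated_ccsc_treatment(pdt_available: bool,
                                         focal_extramacular_leak_on_fa: bool,
                                         foveal_atrophic_rpe_changes: bool) -> str:
    """Rough first-line suggestion for cCSC without CNV or PCRD."""
    if foveal_atrophic_rpe_changes:
        # Severe cCSC with atrophic RPE changes involving the fovea: PDT carries a
        # small risk of further vision loss, so alternatives are weighed despite
        # their apparently lower efficacy.
        return "consider MR antagonist or HSML (case-by-case discussion)"
    if pdt_available:
        # Half-dose or half-fluence PDT currently has the strongest evidence base;
        # re-treatment or a switch can be considered if SRF persists.
        return "half-dose or half-fluence PDT"
    if focal_extramacular_leak_on_fa:
        # Focal leakage outside the macula can be photocoagulated when PDT is
        # unavailable or cost-prohibitive.
        return "focal laser photocoagulation of the eccentric leakage point"
    return "MR antagonist or HSML, decided case-by-case"
```

The ordering of the checks simply mirrors the narrative above: safety concerns first, then the best-evidenced option, then availability-driven fallbacks.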
Macular subretinal neovascularisation can occur in patients with CSC, and presents most often in patients with severe cCSC. Moreover, CNV was reported to occur in 2–18% of cCSC patients. Although CNV can be present at the start of a CSC episode, it can also develop gradually, particularly in patients over the age of 50 and/or patients with prolonged disease. Subretinal leakage from type 1 neovascularisation due to pachychoroid neovasculopathy can mimic uncomplicated cCSC. CNV can be identified using multimodal imaging techniques such as OCT, FA, ICGA, and ‒ in particular ‒ OCT angiography, although this detection can be challenging in small, early-stage CNV and in severe cCSC with extensive chorioretinal abnormalities. Therefore, it may not be uncommon for a patient to be initially diagnosed as having CSC without CNV, even though a small CNV may have actually been present at the time of diagnosis. The clinician should suspect CNV particularly in patients who were relatively old at onset, have a mid/hyperreflective signal below a flat irregular RPE detachment, a putative CNV structure on OCT angiography, and/or a well-demarcated CNV ‘plaque’ on ICGA. Because up to two-thirds of patients with CSC with CNV can have a polypoidal component, ICGA is an important imaging tool for identifying and localising these polypoidal structures. The standard treatment for CSC complicated by active subretinal CNV is intravitreal anti-VEGF treatment, possibly supplemented by half-dose or half-fluence PDT, as several studies have demonstrated good efficacy in these cases. For example, the MINERVA study found that intravitreal ranibizumab is effective in CNV with an unusual origin, including CNV due to CSC. At the primary endpoint, the authors found that eyes with CNV due to CSC treated with ranibizumab had an improvement in BCVA of 6.6 ETDRS letters, compared with only 1.6 letters in the sham group. With respect to polypoidal choroidal vasculopathy, the large randomised controlled EVEREST II and PLANET trials found that a combination of full-dose PDT and intravitreal ranibizumab or aflibercept was beneficial. In addition, Peiretti and colleagues recently reported that 50% of polypoidal lesions were closed after full-fluence PDT monotherapy, compared to 25% of lesions in patients who received anti-VEGF monotherapy. An interesting group of patients with vascularised CSC is characterised by flat irregular PEDs in which a thin neovascular network can be detected on OCT angiography but not with other imaging techniques. This so-called ‘silent type 1 CNV’ may actually be quite common in cCSC. However, given that the contribution of this type of CNV to subretinal leakage, as well as its role in the progression of vision loss, has not been investigated, the use of anti-VEGF therapy in these cases should be weighed carefully and may be deferred until active leakage becomes evident. Because of the likelihood of progression to severe vision loss, treatment should be advocated for patients with cCSC complicated by PCRD. However, the efficacy of standard PDT and half-dose PDT is relatively poor in this patient group. Using various reduced-setting PDT protocols in 25 eyes with severe cCSC with PCRD, Mohabati and colleagues achieved complete resolution of intraretinal fluid in 11 eyes and reduced PCRD in 12 eyes, and observed no change in 2 eyes at the first visit after treatment. In contrast, Silva and colleagues reported complete resolution of intraretinal fluid in 10 out of 10 patients with cCSC and PCRD after treatment with full-setting PDT. The relatively poor responses to PDT could be due to the degenerative pathophysiological nature of PCRD in cCSC, in which factors other than persistent SRF and choroidal-RPE dysfunction become relevant once PCRD becomes chronic. Inconsistent results obtained after using PDT for cCSC with PCRD ‒ regardless of the PDT setting used ‒ may also be due to the relatively common presence of diffuse atrophic RPE changes, which can make it difficult to select the area for laser treatment. In evaluating these results, it should also be noted that intraretinal fluid may be reabsorbed at a slower rate than SRF. Moreover, a strong topographic correlation has been found between the cystoid intraretinal spaces and points of chorioretinal adherence at the site of subretinal atrophy and fibrosis. Subretinal fibrotic scars have also been reported to develop from subretinal fibrin in eyes with severe CSC. These scars may represent focal areas of chorioretinal adherence and breakdown of the RPE barrier, providing a direct passage for fluid to diffuse from the choroid into the retina in the case of choroidal hyperperfusion. OCT angiography, FA, and/or ICGA should be performed to rule out the possibility of CNV in
cCSC patients with intraretinal fluid, as up to 45% of these cases may indeed have CNV and should be treated accordingly.Some patients do not fit into the classification systems discussed above.For example, in some cases the presence of CNV can be ambiguous.In cases in which the diagnosis is not clear, determining the optimal treatment can be challenging."In such cases, the treatment strategy may depend on a variety of factors, including the patient's wishes, the BCVA and age, the prognosis with respect to disease progression, the treating physician's personal preferences, and a range of other clinical and non-clinical parameters.The results of several ongoing prospective randomised controlled clinical trials will be available in the next few years.These studies include the investigator-initiated multicentre VICI and SPECTRA trials, both of which are expected to report their results within the coming two years.The VICI trial is the first large, prospective multicentre randomised placebo-controlled trial designed to investigate the use of eplerenone in treating cCSC.In this trial, 104 patients with cCSC are randomly allocated to receive either eplerenone or sham treatment.The primary outcome of the VICI trial is BCVA measured at the 12-month follow-up visit.Secondary outcomes include low luminance visual acuity, central macular thickness, height of the SRF, choroidal thickness, and adverse events.The placebo-controlled aspect of this trial will provide valuable information regarding the natural course of cCSC, as both aCSC and cCSC can resolve spontaneously without treatment.The Study on half-dose Photodynamic therapy versus Eplerenone in chronic CenTRAl serous chorioretinopathy is the first prospective multicentre randomised controlled trial designed to compare half-dose PDT with eplerenone treatment with respect to achieving complete resolution of SRF and improving the quality of vision.This study follows the PLACE trial, in which PDT was found to be superior to treatment with HSML in cCSC.The target number of patients to be included in the SPECTRA trial is 107.The primary endpoint of the SPECTRA trial is a measure of the difference between half-dose PDT and eplerenone treatment in patients with cCSC in terms of both complete resolution of SRF on OCT and safety.The secondary functional endpoints include BCVA, retinal sensitivity on macular microperimetry, and vision-related quality of life measured using a validated questionnaire.Additional secondary endpoints include the number of patients who receive crossover treatment in each treatment arm, the mean change in ETDRS BCVA over time among those patients with subsequent treatment and patients without subsequent treatment, and the mean changes in ETDRS BCVA, retinal sensitivity, and NEI-VFQ-25 over time.These parameters are obtained up to two years after enrolment.The results of these trials and other large studies will likely lead to an evidence-based treatment guideline for CSC.At the same time, it is just as important to more accurately define the subtypes of CSC by performing detailed multimodal imaging studies.These studies will facilitate reaching a consensus regarding the classification of CSC, which is urgently needed given that the optimal treatment strategy likely differs among CSC subtypes.It is also essential that intervention studies use comparable clinical endpoints and aim to achieve complete resolution of the serous neuroretinal detachment.In this respect, artificial intelligence and ‘deep learning’ are likely to become 
important in the diagnosis and follow-up care of retinal diseases, including CSC.For example, artificial intelligence can be used to discover new characteristics and prognostic markers in CSC by analysing large amounts of annotated multimodal imaging data.Deep learning protocols and artificial intelligence may also reveal CSC-specific patterns on multimodal imaging.With the addition of clinical parameters, it may one day be possible to develop an algorithm to support treatment decisions.Moreover, large studies regarding genetic and other risk factors may shed new light on the pathophysiology of CSC.For example, recent studies revealed similar genetic risk loci with partly opposite effects between CSC and AMD.These findings may also have future implications for treating CSC.With the ability to culture choroidal endothelial cells, it may now be possible to study the effects of various substances such as corticosteroids using an in vitro approach.Studies involving these in vitro choroidal cell models may eventually lead to the identification of pathophysiological pathways in CSC and help develop new treatment strategies for CSC.Another emerging topic of interest that warrants further study with respect to preventing and treating CSC is based on the haemodynamic condition of patients with CSC.A growing body of evidence suggests that patients with CSC may have a functional change in the physiological mechanisms that regulate choroidal blood flow, and this change may even be induced by emotional and/or physical stress.The classification and treatment of CSC has long been ‒ and remains today ‒ subject to controversy.In recent years, several relatively large studies regarding the treatment of CSC have been published, some of which were conducted in a multicentre prospective randomised controlled setting.Based on the subtypes of CSC that were roughly defined in these studies, the treatment outcomes and treatment strategies of choice are slowly evolving.With respect to aCSC, treatment can often be deferred, unless specific circumstances such as professional reliance on optimal vision indicate intervention.When treatment is indicated in aCSC, the current evidence suggests that half-dose or half-fluence PDT guided by either ICGA or FA may be the treatment of choice for accelerating SRF resolution, improving vision, and decreasing the risk of recurrence.Based on efficacy and safety data from retrospective and prospective studies such as the prospective multicentre randomised controlled PLACE trial, half-dose PDT should be considered the treatment of choice for cCSC.Thus, the available evidence to support the use of PDT in cCSC may also alleviate current restrictions in reimbursement for this off-label treatment indication.In elderly patients who present with a clinical picture of CSC, the presence of a shallow RPE detachment with mid- or mixed reflectivity below the RPE detachment is highly suggestive of a neovascular membrane, which can be confirmed using OCT angiography and ICGA.ICGA can also be used to visually determine whether such a sub-RPE neovascular membrane has a polypoidal component.Evidence suggests that these CSC cases with subretinal CNV should be treated using intravitreal injections of anti-VEGF compounds and/or half-dose or half-fluence PDT.In the case of polypoidal choroidal vasculopathy, intravitreal anti-VEGF either as a monotherapy or combined with PDT should be considered for targeting the choroidal abnormalities such as pachychoroid and hyperpermeability, as well as the 
neovascular and/or polypoidal component.Large multicentre randomised controlled trials are currently underway and will likely shed more light on the efficacy of various treatments such as eplerenone, providing a better comparative overview of the principal treatment options that are currently available.The controversy regarding the classification of CSC and the desired clinical endpoints of treatment remain important topics that will need to be addressed in order to optimise the design of future randomised controlled trials.The outcome of these studies will certainly facilitate the establishment of an evidence-based treatment guideline for CSC.no conflicting relationship exists for any author.This work was supported by the following foundations: MaculaFonds, Retina Netherlands, BlindenPenning, and Landelijke Stichting voor Blinden en Slechtzienden, that contributed through UitZicht, as well as Rotterdamse Stichting Blindenbelangen, Haagse Stichting Blindenhulp, ZonMw VENI Grant, and Gisela Thier Fellowship of Leiden University.The funding organizations had no role in the design or conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.They provided unrestricted grants.Components of the study were facilitated by ERN-EYE, the European Reference Network for Rare Eye Diseases.FGH: Consultant to Acucela, Apellis, Allergan, Formycon, Galimedix, Grayburg Vision, Heidelberg Engineering, Novartis, Bayer, Ellex, Oxurion, Roche/Genentech, Zeiss.Research grants from Acucela, Allergan, Apellis, Formycon, Ellex, Heidelberg Engineering, Novartis, Bayer, CenterVue.Heidelberg Engineering, Roche/Genentech, NightStar X, Zeiss.KBF: consultant to Zeiss, Heidelberg Engineering, Optovue, Novartis, and Allergan.He receives research support from Genentech/Roche.ML: ML and his employer the Rigshospitalet have received payments for the conduct of clinical trials and consulting fees from Novartis, Alcon, Bayer, Roche, Oculis, Sanofi, Novo Nordisk, Acucela, AbbVie and GSK.SF: employee.SS: received travel grants, research grants, attended advisory board meetings of Novartis, Allergan, Bayer, Roche, Boehringer Ingelheim, Optos.TYYL: received honorarium for consultancy and lecture fees from Allergan, Bayer, Boehringer Ingelheim, Novartis and Roche; research support from Kanghong Biotech, Novartis, and Roche; and travel grants from Santen.No financial disclosures exist for any of the other authors.
Central serous chorioretinopathy (CSC) is a common cause of central vision loss, primarily affecting men 20–60 years of age. To date, no consensus has been reached regarding the classification of CSC, and a wide variety of interventions have been proposed, reflecting the controversy associated with treating this disease. The recent publication of appropriately powered randomised controlled trials such as the PLACE trial, as well as large retrospective, non-randomised treatment studies regarding the treatment of CSC suggest the feasibility of a more evidence-based approach when considering treatment options. The aim of this review is to provide a comprehensive overview of the current rationale and evidence with respect to the variety of interventions available for treating CSC, including pharmacology, laser treatment, and photodynamic therapy. In addition, we describe the complexity of CSC, the challenges associated with treating CSC, and currently ongoing studies. Many treatment strategies such as photodynamic therapy using verteporfin, oral mineralocorticoid antagonists, and micropulse laser treatment have been reported as being effective. Currently, however, the available evidence suggests that half-dose (or half-fluence) photodynamic therapy should be the treatment of choice in chronic CSC, whereas observation may be the preferred approach in acute CSC. Nevertheless, exceptions can be considered based upon patient-specific characteristics.
464
The health needs and healthcare experiences of young people trafficked into the UK
Human trafficking is “the recruitment, transportation, transfer, harbouring or receipt of persons by means of threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power, or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation”.Trafficking is believed to affect every country of the world, as countries of origin, transit or destination, and the International Labour Office estimates that up to 20.9 million people worldwide may be in situations of forced labour as a result of human trafficking.In the UK, the Modern Slavery Act 2015 addresses both human trafficking and slavery, defining slavery as knowingly holding a person in slavery or servitude or knowingly requiring a person to perform forced or compulsory labour.An offence of human trafficking is committed if a person arranges or facilitates the travel of another person with a view to that person being exploited, where exploitation refers to slavery, servitude, forced or compulsory labour, sexual exploitation, removal of organs, or the securing of services by force, threats, deception or from children or vulnerable persons.The United Nations Palermo Protocol, which includes the definition quoted above, established that children and young people under 18 cannot consent to their own exploitation regardless of the degree of coercion involved; this concept has been incorporated into UK guidance.The covert and illegal nature of trafficking, together with challenges in achieving a consistent definition, makes for difficulties in measuring its prevalence.Some indication of the scale of the issue in the UK can be obtained from figures provided by the UK National Referral Mechanism which provides the route through which trafficked people can apply for temporary immigration protection, accommodation and support.In 2014, 671 children and young people under 18 were referred into the NRM; the most common countries of origin were Albania, Vietnam, the UK, Slovakia, and Nigeria.However, this is far from a full picture since this only includes those who have exited from the trafficking situation and are in contact with statutory and voluntary agencies permitted to make referrals on behalf of children they suspect may have been trafficked.Fears of recriminations from traffickers and/or arrest or deportation by the authorities can act as barriers to help-seeking and use of official agencies.Adolescents’ mistrust may be heightened further because they are often obliged to prove their status as children in order to access support from children’s social services.Many fear that they will lose their right to stay in the UK at the age of 18.The experiences and needs of trafficked children and young people have also been difficult for researchers to capture and there are similar reasons for this, although high levels of vulnerability together with gatekeepers’ concerns about the safety and confidentiality of this group may also play a part.Few studies have been able to access trafficked young people directly.One exception is Kiss et al.’s survey of 387 10–17 year olds in the Greater Mekong Subregion.The authors found that the girls participating in their study had been trafficked primarily for forced sex work.Over half the young people in their sample reported symptoms indicative of depression, one in three had symptoms of an anxiety disorder and 12% had tried to harm or kill 
themselves in previous month.In the UK, Franklin and Doyle interviewed 17 young people aged between 15 and 23 who had been trafficked as children.They also surveyed local authorities and completed telephone interviews with key stakeholders.Their findings identified a high level of need for mental health services and highlighted poor continuity of care for trafficked children who had to retell their histories of abuse and exploitation to numerous social workers.Some evaluations of initiatives for young people who are either asylum seekers or trafficked, such as Crawley and Kohli’s largely positive evaluation of the Scottish Guardianship pilot service included interviews with and case file studies of small numbers of young people who had been trafficked.This study found that Guardians could play a key role in assisting these young people to navigate and access health services.Varma, Gillespie, McCracken, and Greenbaum’s US study used case file review to study 84 children aged 12–18 presenting at hospital emergency departments or at a child protection clinic, of whom 27 were defined as victims of commercial child sexual exploitation or trafficking.Over 50% of this group had had a sexually transmitted infection and they were more likely than a comparison group of sexually abused young people to have experienced violence and to have a history of drug use.A similarly high rate of STIs was found by Crawford and Kaufman who studied the case files of 20 sexually exploited adolescent females receiving post-trafficking NGO support in Nepal.Other studies have focused on the knowledge and perceptions of practitioners working with trafficked children and young people with the aim of improving identification and service provision for this group.Pearce completed focus groups and interviews with 72 UK practitioners and analysed 37 case studies from the files of a child trafficking advice and information service.She identified a ‘wall of silence’ constructed from children’s anxieties associated with talking about their experiences and practitioners’ lack of knowledge of indicators of trafficking or their disbelief of children’s accounts.Together, these made for difficulties in identifying and responding to trafficked children and young people.She found that practitioners were sometimes unable to distinguish between smuggling and trafficking and that there was potential for the sexual exploitation of trafficked boys to be overlooked.Ross et al.’s survey of 782 health professionals in England found that over half did not feel confident that they could make appropriate referrals for trafficked children.Eighty per cent of the sample considered that they had not received sufficient training to be able to assist individuals whom they suspected might be trafficked.Cole and Sprang’s study identifies the uneven nature of the response to trafficked young people.They completed a telephone survey with 289 professionals in metropolitan, micropolitan and rural areas in the US.While they found practitioners across all areas reported similarities in the situations of children and young people who had been trafficked for sexual exploitation, professionals in metropolitan areas were more likely to have experience of working with victims of sex trafficking, to have received appropriate training, be familiar with relevant legislation and to perceive it as a fairly or very serious problem.This mixed methods study was planned to provide an in-depth picture of the health needs and healthcare experiences of young people in England 
who had recently been trafficked from other countries. We also aimed to understand the challenges they faced in accessing health services by exploring both young people's and professionals' perceptions of the barriers and enablers to healthcare provision. The study was part of a larger programme of research which also included a survey of the health needs and experiences of trafficked adults, as well as two systematic reviews, an analysis of the characteristics of trafficked adults and children in contact with secondary mental health services, and a survey of healthcare professionals' knowledge and attitudes towards human trafficking. We aimed to recruit trafficked young people aged 14–21 who were no longer in the setting where they had been exploited. Young people were identified through voluntary sector organisations providing post-trafficking support and children's social services departments located in London and in the South, South East, West and North West of England. Support workers at participating organisations approached potentially eligible participants and provided them with basic information about the study; written information was available in multiple languages. Interviews were scheduled with assistance from support workers, with the verbal consent of potential participants. Written consent was obtained by researchers prior to conducting face-to-face interviews. Young people were interviewed either in their current accommodation or in agency premises. Professionally qualified, independent interpreters were used in nine of the 29 interviews, and four young people chose to have their carer or support worker present during the interview. The length of interviews varied between 60 and 120 min depending on whether an interpreter was used, whether breaks were taken and the extent of the young person's use of healthcare services. All participants were given a £20 high street shopping voucher to thank them for their participation, and travel and childcare expenses were reimbursed. Attention was given to protecting their confidentiality and anonymity, and ethical approval for the study was provided by the National Research Ethics Service. Qualitative face-to-face interviews were carried out with health practitioners and professionals working outside the health sector, including civil servants, voluntary sector organisations, police officers, and members of the UK Human Trafficking Centre. These aimed to explore professional perceptions of the barriers and facilitators to trafficked people's access to healthcare. Eligible professionals were identified with assistance from advisory groups and local collaborators and by snowballing. A purposive approach was taken to sampling, with the aim of recruiting a variety of health professionals from relevant settings and a range of local and national stakeholders from welfare, legal and security services. Interviews were recorded and professionally transcribed. In this paper, we draw on data from seven of these interviews that addressed service provision for children and young people. The health survey was completed face-to-face, which enabled the interviewer to provide explanations and reassurance when needed. Data were collected on socio-demographic factors and on pre-trafficking and trafficking experiences, including exploitation type, duration of exploitation, time since escape, living and working conditions and violence, with questions devised in line with other studies in this field. Medical history was assessed using questions from the 2007 English Adult
Psychiatric Morbidity Survey. Physical symptoms were assessed using the Miller Abuse Physical Symptoms and Injury Survey. Severe symptoms were defined as symptoms which bothered the participant "quite a lot" or "extremely". Questions adapted from the third UK National Survey of Sexual Attitudes and Lifestyles were used to ask about sexual and reproductive health. For participants aged 18 and above, probable depressive disorder was assessed as a score of 10 or more on the Patient Health Questionnaire-9, and probable anxiety disorder as a score of 10 or more on the Generalized Anxiety Disorder 7. For under-18s, psychological distress was assessed as scores above 5, 4, and 6 on the emotional difficulties, conduct difficulties, and hyperactivity subscales of the Strengths and Difficulties Questionnaire respectively, or probable PTSD. Participants were categorised as having high levels of psychological distress if they screened positive for one or more of probable depressive disorder, anxiety disorder or above-threshold scores on the SDQ. Probable PTSD was assessed for both over- and under-18s as a score of 3 or more on the 4-item version of the PTSD Checklist-Civilian. For all participants, suicidality was measured using the Revised Clinical Interview Schedule: participants who endorsed two or more items on the suicidality sub-section were categorised as suicidal.
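For readers who want the screening rules above in one place, the following is a minimal sketch of how the stated cut-offs could be applied. It is purely illustrative and is not the study's analysis code (the quantitative analysis was run in STATA 11); the function and variable names are hypothetical.

```python
# Minimal, illustrative restatement of the screening cut-offs described above.
# Not the study's analysis code (that analysis used STATA 11); names are hypothetical.

def probable_ptsd(pcl_c_4item_score: int) -> bool:
    """4-item PTSD Checklist-Civilian: a score of 3 or more, applied at all ages."""
    return pcl_c_4item_score >= 3

def high_psychological_distress(age: int, phq9: int = 0, gad7: int = 0,
                                sdq_emotional: int = 0, sdq_conduct: int = 0,
                                sdq_hyperactivity: int = 0) -> bool:
    """Positive if the participant screens positive for probable depressive disorder,
    probable anxiety disorder, or an above-threshold SDQ subscale score."""
    if age >= 18:
        # PHQ-9 >= 10 -> probable depressive disorder; GAD-7 >= 10 -> probable anxiety disorder.
        return phq9 >= 10 or gad7 >= 10
    # Under-18s: SDQ emotional difficulties > 5, conduct difficulties > 4, hyperactivity > 6.
    return sdq_emotional > 5 or sdq_conduct > 4 or sdq_hyperactivity > 6

def suicidal(cis_r_suicidality_items_endorsed: int) -> bool:
    """Revised Clinical Interview Schedule: two or more suicidality items endorsed."""
    return cis_r_suicidality_items_endorsed >= 2
```

Probable PTSD is kept as a separate flag here because the text reports it alongside, rather than within, the high-distress category.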
Completion of the health survey was immediately followed by a series of open questions exploring experiences of accessing and using health services. This part of the interview was audio-recorded with young people's consent and professionally transcribed. A topic guide was developed with assistance from the study's national advisory group, piloted with four participants and revised accordingly. Topics addressed included experiences of referring and assisting trafficked people to access healthcare; opportunities, barriers, feasibility and recommendations for coordination between the NHS and other aspects of the UK response to human trafficking; and relevant examples from their own practice. Interviews were conducted face-to-face, digitally recorded and transcribed verbatim. The software package STATA 11 was used to analyse the quantitative data. Descriptive statistics for continuous variables were used to describe socio-demographic and trafficking characteristics and other variables of interest. The qualitative data from interviews with young people and from the interviews with professionals on access and use of health services were stored and sorted with the assistance of NVivo, and a Framework approach was adopted for the analysis, with a coding frame that incorporated both questions from the interview schedule and emerging themes. Coding was discussed by research team members and the coding frames were refined and revised accordingly. In the event, we were not successful in recruiting any young people aged under 16. As is often the case when researching sensitive issues with a vulnerable population of children or young people, recruitment was a demanding process: local authorities lacked the systems that would enable them to search their records for trafficked young people, gatekeepers were anxious about requiring young people to retell their story, and some young people were reluctant to participate in the study. Twenty-nine young people aged 16–21 who had been trafficked into the UK from other countries completed the health survey. Participants originated from twelve countries, including Nigeria, Albania, and Slovakia. The length of time that had elapsed since they exited that situation ranged from three weeks to six and a half years. The median length of time since leaving the trafficking situation was 12 months, so most were commenting on experiences that were relatively recent. Table 1 shows that five of the participants were male and all but one of the young men were in the 20–21 age group, while the young women were more evenly spread across the 16–21 age range. The age at which young people were trafficked was calculated from questions that asked how long they had been in the trafficking situation and how long they had been out of it. Over half the group were under 16 at the time they were trafficked, with the youngest age at which anyone was trafficked being six, and the oldest 20. The majority originated from African countries, with two-fifths of the young women from Nigeria. Nearly a third of the young women were Eastern European. Although eight young people had been married or promised in marriage, only one described herself as currently married. Three young women had children who were living with them in the UK at the time of the interview. Table 2 shows that three young people had no formal schooling, but most had continued their education into their teens. When asked about learning disabilities or difficulties reading in their own language, a third of the young women reported a disability or reading difficulties. Eleven of the young women and all but one of the young men described being hit, kicked or physically hurt prior to being trafficked. Eight of the young women said that they had been forced to have sex before they had been trafficked. Identified perpetrators included family members, recruiters, acquaintances and strangers. Despite the fact that most had received schooling in their teens, this group of young people appears to have had some key vulnerabilities which may have exposed them to trafficking. The length of time the young people had remained within the trafficking situation ranged from two weeks to eight years. The median duration of exploitation was 12 months. Although some young people experienced more than one type of exploitation, the main type of exploitation experienced while trafficked is shown in Table 3. Table 3 shows that sex work was by far the largest category of work into which young people were trafficked, followed by domestic servitude. It is worth noting that two young men were exploited in sex work and domestic servitude. This reflects the gender balance in this group of respondents, but it also highlights that, for the majority of this group of vulnerable young people, sexual exploitation and its consequences were central experiences. Sex work and domestic service are, of course, frequently characterised by informal and illegal working arrangements and are particularly resistant to inspection or regulation. Only two of the young people reported working eight hours or less a day. Thirteen said that they had no fixed hours. Only four had one or more rest days a week when working. This group had experienced high rates of violence and threats whilst trafficked. Twenty-four reported being physically hurt and 16 young women had received an injury. Eighteen young people, including all but two of the young women and two young men, had been forced to have sex. Sexual violence was experienced by both young women and young men, and by those trafficked for forced sex work as well as those exploited as domestic workers and in other labour sectors. Twenty-six of the 29 young people reported being physically
threatened and eleven were threatened with harm to their family.Two-thirds said they were still scared of their traffickers.They also reported considerable restrictions of liberty and deprivation, as shown in Table 5.Fifteen young women and three young men had been confined in a locked room while seventeen had been denied access to their passport or identity documents.Thirteen had nowhere to sleep or had slept on the floor and 12 described sleeping in overcrowded conditions.Nearly half of the participants said that they had had no clean clothing and six described lacking basic hygiene facilities.Eleven said they did not have sufficient food while four reported insufficient water.A small number described being forced to drink alcohol or take drugs, and three were forced to take medication while trafficked.Pregnancy and sexual health conditions represented life-changing issues for the young women, and these were experienced by those trafficked into other settings as well as those trafficked for sexual exploitation.Five young women had become pregnant while trafficked: three of these were working in the sex industry; one was trafficked into domestic servitude, and one for labour exploitation.Two had an abortion, two were currently pregnant and one had given birth and had her child with her.None had seen a midwife whilst they were in the trafficking situation.A further two young women had had children since leaving the trafficking situation.Four young women reported having been previously diagnosed with sexually transmitted infections and two had been diagnosed with HIV.This last group included young women trafficked for sexual exploitation, domestic servitude and labour exploitation.Table 6 shows that over half the young people described being bothered by headaches in the last four weeks and nine reported memory problems.Seven had been worried by stomach pains and seven young women noted back pain, including one of the three young women who were pregnant at the time of interview.Six young women had experienced dental pain.Table 6 indicates that mental health disorders were found to be at a high level in this group of young people.Two-thirds screened positive for probable disorders, including PTSD.Over half the group had PTSD symptoms.The qualitative data collected after the survey was completed included accounts of feeling overwhelmed by memories of abusive experiences:… if there was anything I could do just to clear the memory, I would do it.Just erase everything…all of it.Twelve reported suicidal thinking in the last week.Two young people described recently attempting suicide, one whilst in detention.Both described difficulties in coping with the overwhelming feelings they were experiencing:I tried suicide….Some people can’t open up and talk and then a doctor who is a professional … you can talk and you can open up.When we examined the relationship between probable mental disorder and type of exploitation, we found that 15 of the 18 who had been sexually exploited, all those who had been trafficked into domestic servitude, and two of the three trafficked into labour exploitation scored positive for a probable mental disorder.There was considerable overlap between prior vulnerability and mental disorder with five of the eight who had been forced to have sex prior to being trafficked reporting symptoms indicative of a mental disorder, and seven of the eight who had difficulty with reading in their own language or a learning difficulty also reported symptom levels associated with a 
disorder.The small numbers involved here means it is inappropriate to measure p values but these findings suggest avenues for future research.The young people interviewed described a range of barriers to utilising health services.While they were in the trafficking situation, their access to health professionals and freedom to make decisions about healthcare was frequently restricted by traffickers.One young woman who had an abortion arranged by her traffickers said that she was encouraged by a nurse to approach the authorities and explain her situation but she feared for her safety if she followed this advice.Another young woman trafficked for sex work described how she was too frightened to explain her situation honestly to a sexual health worker:They ask me why are you doing this, do you like doing this?,I say ‘yes’ because I was scared.However, even when they had escaped from their traffickers, complex gatekeeping systems seemed to impede or delay their access to and use of health services; registration with general practitioners or family doctors was described as particularly difficult:It wasn’t easy, because my friend tried many times before to register me with GP because it was kind of an emergency.I needed to see a doctor because I was pregnant, but the GP wouldn’t register me without any papers from the Home Office, so we had to wait until that paper arrived and then I was registered.Language barriers appeared to exacerbate the challenges of dealing with complex and unfamiliar systems and organisations and the absence or limited availability of interpreters or reliance on telephone interpreting systems could make for difficulties in communicating directly with health professionals.Some young people expressed a preference for face-to-face interpreting services; for example, a young man who had experience of a telephone interpreting service remarked that “maybe it would be better if the interpreter came in person”.Some young people interviewed reported feeling as if they were not listened to or believed or taken seriously by health professionals.As noted by one young woman who had to make repeated visits to her family doctor:It’s good to listen to children …Check then what they say…when I had the, the pain in my throat, he didn’t give me medicine.A few young people emphasised the importance of being able to make choices about their healthcare, including being able to request a female health practitioner.Some young women had found staff working in maternity services to be particularly helpful, and appreciated staff continuity and the opportunities this offered for developing trusting relationships.This young woman was very positive about her first experience of UK healthcare; she was admitted to hospital as an emergency very late in her pregnancy and found midwives attentive and reassuring:Because they was there with me when I need them.,Support workers from relevant organisations, foster carers and others, including friends, were described as playing a key role in advocating for young people’s health needs and assisting them to navigate services.One young woman described how her support worker would break down the health professional’s communication into comprehensible messages for her: she “put it in pieces for me so I will understand.,.Young people, who could remain fearful of traffickers even after their escape, wanted reassurances about confidentiality in their contacts with health services.They wanted to be given time to explain their needs and for their accounts to be 
respected:The most important thing is to ask, and to give you time to explain how you are feeling instead of just assuming what is wrong, giving you the chance to explain, and listening to your opinion about why you feel like that.Whilst some, like the young woman quoted above, wanted to talk about their experiences, for others, talking evoked too many distressing thoughts.One young person with mental health problems explained that she did not want to discuss her past experiences of trafficking with her psychiatrist, but instead wanted to look ahead:I want to forget what happened.I just want to move on.I just want to get my own flat and live and maybe get a job.Another young woman’s desire to forget appeared to be linked to having been asked to repeat her story on numerous occasions to different people:I’ve said my, everything I know, to police, to social worker, to social services, so… I’ve gone through a lot of things, so I don’t think I can say anything more, much like that anymore.I need to forget… I don’t want to remember them anymore.For some young people, mental health or counselling services may need to be made available at a later date when they feel they need them or are more ready to participate.This requires repeated offers of such support, even if it is initially declined.Seven of the 52 professionals interviewed had experience of providing services for children and young people.Two were health professionals; two worked in specialist sexual health services and two worked in non-governmental organisations that offered advice and support services to trafficked children and young people.Six were female and one was male.Some key themes identified in these interviews reiterated and reinforced findings from the health survey.In discussing the challenges involved in identifying the health needs of trafficked people, interviewees noted that a substantial proportion of trafficked young people were themselves parents and that this was often a consequence of their exploitation:… I probably see about a third of people that actually have children from their exploitation.And …a couple of them actually had children in the exploitation .They didn’t actually have the child in hospital.So that’s caused some health problems.A health practitioner had encountered a number of large families where parents had been trafficked into the country and had brought their children with them:The example of a whole family who’s been trafficked and actually the father is the one who’s been trafficked to work, with the promise of a better life and better job for his family…he brings the whole family with him and…they all end up in some way being trafficked and exploited.Interviewees agreed that sensitivity, attention to confidentiality and continuity of staff were needed from health professionals when engaging with trafficked children and young people:…if I need them to go to a GUM clinic … I would always ring up and arrange a special appointment for them, so they’re seen by a senior doctor, they’re not sitting in with everybody else waiting in the waiting room.The best outcome is that they, you know have access to a GP and that they get to see the same person every time they go, and that a relationship is built up between them… if there has been sexual abuse… That might be something that they might talk to a doctor about, if they had a chance to kind of create a bit of familiarity and a bit of trust.However, NGO practitioners did not consider that health professionals consistently provided a sensitive and 
responsive service.One interviewee noted that maternity staff had failed to make use of interpreters when working with a young woman who spoke no English.The other reported that health practitioners rarely made direct referrals to relevant specialist support services but rather relied on children’s social work services to access those services for children and young people.This could result in lengthy and convoluted referral routes whereby young people entered out-of-home care and were then registered with a GP who was expected to refer them to specialist services.This NGO practitioner argued that from the perspective of healthcare practitioners: “They’re seen first and foremost as an immigrant, and then as a young person with potential health needs.,.The professionals interviewed concurred with the data showing high levels and prevalence of mental health need among trafficked children and young people.While they attributed much of this to the trauma and violence experienced when trafficked, they noted that post-trafficking isolation, poor living conditions and lack of support could also contribute to the development of mental health disorders.The age assessments undertaken by social workers to ascertain if young people were under 18 and entitled to health, education and social work services were identified as a particular source of stress and were perceived to contribute to the erosion of trust of statutory services:It causes them a lot of stress…and also this, sort of, opinion of not believed.Some talk about that…no-one believes them… It causes impact on them feeling valued, and it knocks their confidence a lot…they do often talk about having to tell their story again and again and again and no-one believing them and they’ve said the same thing lots of times and why aren’t people believing them.Practitioners agreed that, while some trafficked young people wanted and were keen to use mental health or counselling services, others needed to ‘move on’, whilst others needed to wait until they were ready to use this type of support.However, they were often not offered repeat opportunities to access these services:I think a lot of the time a child will say, or a young person will say, ‘I don’t want counselling now, it’s not for me,’ and then it’s like, ‘Oh, well they’ve refused counselling,’ and it will never be readdressed.Both the NGO practitioners and health professionals noted the ‘huge gap’ in availability of mental health services for this group and described long waiting lists with services being “completely oversubscribed: throughout the country, there are not enough mental health services.,.The majority of young people participating in this study had experienced serious physical harm or threats whilst trafficked as well as some form of restriction of liberty and/or deprivation.In addition, sexual violence was a widespread experience among this group and this was not confined to those working in the sex industry, but was also experienced by young women trafficked into domestic servitude and by young men.This indicates how important it is that health professionals assess for sexual violence and address the sexual health needs of all young trafficking survivors, regardless of gender or type of exploitation.The high level of mental health needs among the trafficked young people surveyed was striking but is consistent with the findings of Kiss et al.’s Mekong study.There was an indication that prior vulnerability in the form of learning difficulties or earlier experiences of sexual abuse 
might be related to vulnerability to being trafficked and a study of the health needs of women and adolescents trafficked into Europe highlights that pre-trafficking experiences of violence and abuse can make women a target for traffickers as well as possibly contributing to their motivation to leave home.A history of abuse may also increase susceptibility to physical illness and mental health problems.Addressing young people’s mental health needs will be a priority for planning service provision.However, professionals interviewed expressed concerns about the availability of children’s mental health services in a climate where service thresholds are increasingly high, so that only those children and young people whose mental health needs have reached crisis point receive a service; this feature of the service landscape has been highlighted by other research.It was notable that three of the young women participating in this study were mothers; their mental health needs might have implications for their parenting.Reconceptualising trafficked people as parents provokes consideration of the impact of the trauma experienced on their parenting and on their children’s development.Offering appropriate health services and support to facilitate access to and take up of those services for trafficked young people may represent a means of intervening early in the lives of families whose histories are marked by exploitation and violence.Over half of the adults participating in the study of adult trafficked people included in the same research programme were found to be parents but little is as yet known about how trafficking-related trauma might impact on parenting and on the children of trafficked people.Professionals’ attitudes also appeared to influence the delivery of healthcare to this group.Confidentiality and sensitivity from those delivering services are key and some groups of health practitioners, such as those working in sexual health services, may be more attuned to the need for such approaches than others.Austerity policies have resulted in increased restrictions on access to health and other public services for those from outside the UK.A climate where health providers are being asked to function as gatekeepers to deny access or collect charges for healthcare from migrants can create confusion and uncertainty regarding rights to receive services for health staff dealing with trafficked young people.When services were accessed, some young people felt that health professionals did not take them seriously and practitioners interviewed suggested that trafficked young people’s experiences of age assessments undertaken by children’s social services contributed to their sense that they lack credibility.The damaging impact of these assessments on young people’s relationships with professionals has been documented by other studies.Healthcare staff need relevant training to enable them to ask appropriate questions in a sensitive and respectful manner.As Pearce notes, trafficked young people have often undergone a rapid transition to adulthood as a consequence of enforced separation from family and experiences of war and trauma, they are used to making difficult decisions on their own and their capacity for making choices needs to be acknowledged.However, it is also important that services such as counselling or psychiatry which may not be appropriate or acceptable in the near aftermath of trauma are offered again when survivors are more prepared to engage in these types of support.Kohli and Mather’s 
study of young refugees in the UK suggested that “young people want to face the present first, the future next and the past last.,The young people interviewed had also encountered considerable barriers to accessing health services, and interviews with practitioners suggest that health professionals need to be better informed about and prepared to refer children and young people directly to specialist services.In the UK, children and young people who have been trafficked are defined as in need of child protection services and this means that health professionals will refer those under 18 directly to social workers.Whilst this procedure has resulted in clearer referral pathways for trafficked children and young people than are available for adults, it may also have contributed to a mindset whereby trafficked children and young people are considered ‘someone else’s business’.At present, some trafficked young people have to experience unnecessarily long and complex routes to specialist services.Support workers from specialist voluntary organisations and others such as foster carers emerged as playing a crucial role in assisting young people to access health services.Clearly, health professionals need to ascertain the identity of such people to ensure that they are not traffickers but their contribution emerged as key in ensuring good communication between health professionals and anxious or fearful young people.Both young people themselves and professionals interviewed emphasised the value of support and advocacy for these young people being delivered in the context of a trusting relationship built over time with an identified individual.A scheme to provide legal guardians who could fulfil this role for trafficked children and young people has been successfully piloted in Scotland and is, at the time of writing, being tested in England; it was implemented in Northern Ireland in 2015.Most adolescents need guidance and assistance to access healthcare services and trafficked young people are particularly disadvantaged in this respect by their lack of familiarity with local healthcare systems, language barriers, frequent changes of address, high levels of trauma and mental health needs, ongoing fear of traffickers and confusion surrounding their legal status and entitlement to free healthcare in the UK.Crawley and Kohli’s evaluation of the Scottish Guardianship pilot includes accounts of Guardians assisting young people to understand the roles of different health providers, providing encouragement and support to access mental health services, accompanying them to specialist appointments and reinforcing and contributing to ongoing interventions.Finally, while this study was able to provide important evidence on the health needs of child trafficking survivors, this mixed methods study has some limitations.We have no means of knowing how representative our sample of trafficked young people was and we were unsuccessful in attempts to recruit young people under 16.However, this is, to our knowledge, the largest study of trafficked young people in a high income country to investigate directly young people’s health experiences and perceptions of services.We have also been able to draw together different data sources to address the question of what might constitute an appropriate response from healthcare services.This study found that young people who survive extreme forms of exploitation are often in need of urgent, as well as ongoing healthcare, especially mental health support.Yet there appear to be 
many barriers associated with accessing and using services, as well as challenges for the professionals providing care.At the time of writing, Europe is experiencing an influx of unaccompanied children on a scale that has not been seen since World War II and concerns about their vulnerability to trafficking were articulated in a debate held at ISPCAN’s 2015 European Regional Conference in Bucharest.There is substantial reason to believe that health services in the UK will be encountering increased numbers of highly exploited children among these unaccompanied minors.Health services should be prepared to meet the needs of these children and young people who, in escaping violence and oppression at home, may be trafficked for exploitation in the sex industries, private households, industries and agriculture of high income countries.Policy makers need to work towards finding better ways to help some of the most vulnerable young people to access the healthcare they clearly require.In addition to the very clear health arguments to be made in favour of doing the utmost to provide much-needed healthcare to children and young people, there are also humanitarian and moral arguments currently being voiced in the UK and across Europe.The knowledge base in respect of the health needs of trafficked young people requires further development.Research is particularly needed to establish which interventions are effective to address children’s and young people’s various mental health needs, especially their longer term outcomes and what might contribute to their coping and resilience.
Young people who have been trafficked may have experienced significant trauma and violence but little is known about their health and healthcare needs. This UK study aimed to address that gap. It included a health survey and qualitative interviews with 29 young people aged 16–21 trafficked into the UK from other countries who were recruited through voluntary organisations and children's social services. These data were supplemented by interviews with relevant professionals. Over half the young people had been trafficked for sex work but sexual violence had also been experienced by those trafficked for domestic servitude and labour exploitation. Physical violence, threats, restrictions of liberty and deprivation were also widespread, as were experiences of physical and sexual violence prior to being trafficked. Five young women had become pregnant whilst trafficked; three were parents when interviewed. Two-thirds screened positive for high levels of psychological distress, including PTSD. Twelve reported suicidal thinking. Whilst some were keen for opportunities to talk to health professionals confidentially and wanted practitioners to treat their accounts as credible, others wanted to forget abusive experiences. Complex gatekeeping systems, language barriers and practitioners who failed to take them seriously limited access to healthcare. Support and advocacy were helpful in assisting these young people to navigate healthcare systems. Health professionals need to recognise and respond appropriately to trafficked young people's often complex mental health needs and refer them to relevant services, as well as facilitating care at later times when they might need support or be more ready to receive help.
465
Pilot scale steam-oxygen CFB gasification of commercial torrefied wood pellets. The effect of torrefaction on the gasification performance
Biomass is considered as a potentially carbon neutral energy source.However, due to its price, moisture content, heterogeneous composition and cost of logistics, it is not yet ideal for many thermal conversion applications.Therefore, efforts are being made to develop upgrading processes which convert biomass into a fuel with superior properties in terms of logistics and end-use.Torrefaction is a thermochemical process, carried out in an oxygen-deficient atmosphere at typically 230–300 °C.During torrefaction the biomass becomes more coal alike; its energy density increases, it becomes more hydrophobic, more brittle and its O/C and H/C molar ratios decrease.Furthermore, if torrefaction is combined with a densification step, the energy density increases on a volumetric basis and its logistics and handling operations are improved .In addition, life cycle assessment studies have shown that torrefied wood offers environmental benefits in global warming impact when it is used for energy applications, such as co-firing with coal for electricity generation and transportation fuels production .Various types of gasification exist based on the applied reactor type.Fluidized bed gasification is a technology which shows benefits in feedstock flexibility and scale-up opportunities.In their handbook on gasification, Roracher et al. describe that there are various operational fluidized bed gasification plants globally; such as large scale coal and biomass plants with capacities up to the order of magnitude of 100 MWth output.The gasifier product gas is fired in lime kilns or dedicated boilers, or it is co-fired with coal for power generation or CHP.The characteristics of the fluidized bed gasification of biomass have been studied extensively using smaller scale facilities.In these experimental studies, the focus was put mainly on the cold gas efficiency, the carbon conversion efficiency, the permanent gas composition and the tar content .So far, only limited studies have investigated the effect of torrefaction on permanent gas composition and tar content during fluidized bed gasification of biomass.Furthermore, these studies were restricted to bubbling fluidized bed gasification and the feedstocks used were torrefied on a small scale by the researchers themselves, except for the study by Kulkarni et al. who acquired their feedstock from the American company, New Biomass Energy, LLC.In general, these authors concluded that torrefaction did not have a positive influence on gasification performance, with respect to CCE and CGE.In addition, they reported a limited effect on permanent gas composition and a reduction of the total tar content.Among these studies, only Kwapinska et al. reported deviating results regarding the effect of torrefaction on the H2 content and on the total tar content.Berrueco et al. performed lab-scale steam-oxygen gasification of Norwegian spruce and forest residues at 850 °C.They reported that increasing the torrefaction temperature from 225 to 275 °C resulted in a marginal increase of the H2 and CO contents and a decrease of the total tar content, up to 85% and 66% for forest residues and spruce, respectively.Furthermore, they presented that due to torrefaction the char and gas yields increased; whereas, the CGE did not show a clear trend.Sweeney performed steam gasification of wood at 788 °C but without mentioning the conditions of torrefaction.The author reported the same effects of increasing torrefaction severity as Berrueco et al. 
with respect to the H2 content and tar content.On the other hand, Sweeney reported a reduction in both CCE and CGE due to torrefaction.Woytiuk et al. performed steam-air gasification at 900 °C of willow and torrefied willow at four different temperatures.These authors reported that increasing torrefaction temperature resulted in an increase of the H2 content and a decrease of the tar content by 47%, when the torrefaction temperature reached or exceeded 260 °C.In contrast with studies mentioned above, the CO content remained unaffected.Kulkarni et al. performed air-blown gasification of pine wood at 935 °C."These authors do not report the torrefaction conditions; they concluded that torrefaction led to a decrease in CGE and to minor changes in product gas constituents' compositions, the H2 content increased and the CO content decreased.Lastly, Kwapinska et al. performed air-blown gasification of miscanthus × giganteus at 850 °C.However, due to the fact that the miscanthus is not a woody type of biomass, their findings are not included in this study.As presented above, there has been limited and, in several aspects, contradictory research on the effect of torrefaction on the permanent gas composition, CCE, CGE and tar content during fluidized bed gasification of biomass.Furthermore, so far only one publication has considered commercially produced torrefied wood and no studies have evaluated the effect of heavily torrefied conditions in wood gasification.No research has been carried out, to our best knowledge, on the impact of torrefaction on the steam-oxygen circulating fluidized bed gasification of wood.Thus, the goal of this study is to investigate the influence of torrefaction on permanent gas composition, tar content, CCE and CGE during steam-oxygen circulating fluidized bed gasification of commercial torrefied wood.The experimental facility at TU Delft consisted of a 100 kWth circulating fluidized bed gasifier followed by a woven ceramic four-candle filter unit operating at 450 °C, and equipped with a gas supply system, a solids supply system and analytical equipment.A schematic of the experimental rig is presented in Fig. 1.Detailed information on the experimental rig has been described elsewhere .Gas and tar were sampled at different locations in the rig.The gas was sampled from the G.A. point downstream the riser and analyzed on-line using a Varian μ-GC CP-4900 equipped with two modules, which measured the volumetric concentration of CO, H2, CH4, CO2 and N2 and benzene, toluene and xylenes, also coded as BTX.The gas composition data from the μ-GC are obtained in intervals of 3 min.In addition, an NDIR analyzer monitors CO2 and CO and a paramagnetic analyzer measures the oxygen concentration with a time interval of 2s.The water content in the product gas was analyzed via sampling a measured flowrate of product gas for a determined timeframe.The gas was cooled in a condenser immersed in a mixture of ice, water and salt.The weight of the condenser was measured at the beginning and at the end of the test.The tar content of the product gas was sampled from the T.P. 
point downstream of the BWF filter according to the tar standard method. The tar samples were analyzed using an HPLC equipped with a UV and fluorescence detector and a reverse-phase column. 20 μL of filtered sample were injected into the column and a gradient elution with methanol–water was performed for 50 min. The UV detector was set at 254 nm. The quantification was performed by external calibration using triplicate data points and standard tar compounds in an appropriate concentration range. All coefficients of determination exceeded 0.990. Four biomass feedstock samples were tested, two commercial torrefied woods and their parent materials; all samples were in pellet form. Two Dutch companies supplied the fuels, Torr®Coal International B.V. and Topell Energy B.V. The Topell torrefied pellets consisted of forestry residues torrefied at 250 °C for less than 5 min with the Torbed® technology, which uses a heat-carrying medium blown at high velocities through the bed bottom to achieve a high heat transfer rate. The Topell black pellets had an outer diameter of 8 mm and a length of approximately 2 cm, and the untreated Topell pellets had an outer diameter of 6 mm and a length of approximately 2 cm. The Torrcoal torrefied pellets consisted of mixed wood, i.e. coniferous and deciduous wood, and residues from Dutch, Belgian and German forests, torrefied at 300 °C for less than 10 min in a rotary drum reactor. Both the Torrcoal black pellets and the untreated Torrcoal wood pellets had an outer diameter of 6 mm and a length of approximately 2 cm. The elemental analysis, proximate analysis and torrefaction degree of the samples are presented in Table 1. The latter was calculated based on the anhydrous weight loss, i.e. the reduction of the volatile content upon torrefaction divided by the initial volatile content, on a dry basis. The elemental composition of all feedstocks was analyzed at the University of L'Aquila, Italy, with a PerkinElmer Series 2 CHNS/O 2400 analyzer. The proximate analysis was performed via thermogravimetric analysis at the Technical University of Delft; for this purpose a Thermal Advantage SDT Q600 thermogravimetric analyzer was used. Detailed information on the TGA procedure has been described elsewhere. Based on the elemental analysis data of the feedstock samples and on data for various fuels obtained from the Phyllis2 online database, a Van Krevelen diagram was drawn that shows the changes in the woody feedstocks due to torrefaction. It confirms that torrefaction decreased the O/C and H/C ratios for both wood feedstocks and, even though Topell white and Torrcoal white occupy approximately the same spot in the diagram, the higher torrefaction temperature of the Torrcoal black feedstock lowered both ratios more than for the Topell black feedstock. Calcined magnesite was used as the bed material in this study. Calcined magnesite is a mineral consisting mainly of MgO with smaller fractions of Fe2O3, CaO and silica. Detailed information regarding the constituents, price and particle size distribution of the bed material can be found in a previous study from our group. The gasification experiments were performed at approximately 805–852 °C and atmospheric pressure. The experiments were carried out varying the equivalence ratio (ER) and the steam-to-biomass ratio (SBR) as presented in Table 2. All the results presented in this paper were measured during representative steady-state time frames. A typical graph of the dry gas composition over time during steady-state operation of the gasifier
is presented in Fig. 3.The permanent gas and the tar species concentrations are presented on a dry and nitrogen-free basis.The CO, CO2, H2, CH4, and BTX contents presented are the average values during the steady state operation.Moreover, the standard deviations of these gas species are presented.On the other hand, the moisture content of the product gas is presented on a wet basis.For water, no standard deviation value is presented due to the nature of the measurement method used.As described above, during the steady state only one measurement for quantification of the water content was performed.The tar yield is presented on a dry ash-free basis of supplied feedstock.Finally, key performance indicators based on mass balance calculations are reported.The four samples were characterized concerning their slow devolatilization behavior in a N2-atmosphere.The changes in mass loss rate versus temperature curves, as presented in Fig. 4, are generally reported to be due to changes in chemical composition during torrefaction.For both torrefied feedstocks, the “shoulder” on the left side of the peak has disappeared, which is generally attributed to the conversion of the hemicellulose fraction in lignocellulosic biomass feedstock .As a consequence, both torrefied feedstocks are expected to contain higher lignin and cellulose contents than their parent materials.Demirbas reported that a higher lignin content results in a higher fixed carbon content, which was found for both feedstocks in this study as well.Torrefaction had an impact on the product gas composition for Topell and Torrcoal feedstocks, as shown in Figs. 5 and 6.Torrefaction resulted in a decrease in CO2, an increase of CO, a minimal increase of H2 and a minimal decrease of CH4.The change of each permanent gas species cannot be discussed in isolation from the others due to the chemical reactions taking place in the gasifier simultaneously.The decrease of the CO2 is attributed to the torrefaction conditions, as the CO2 is the gas that is released in larger amounts at low temperatures due to hemicellulose devolatilization .On the other hand, main sources for the release of CO are cellulose and lignin, as reported by Wu et al. 
.In addition, as torrefaction results in lowering the volatile content and the H content of the fuel, the slight increase in the H2 content in Topell black and Torrcoal black experiments was not expected.This increase can be attributed to steam reforming reactions; due to the higher fixed carbon content of the torrefied material more char is available to react with steam under our process conditions.Lastly, the water content of the product gas is presented in the graphs.The water content in the product gas during Torrcoal black and Topell black experiments was lower than the parent materials.As the water measurement is not considered the most accurate, the modified SBR∗ value was calculated, which consists of the total water ratio to dry biomass input, to investigate whether the different moisture contents of untreated and torrefied material influence this observation.It is found that the SBR∗ is the same among the Topell feedstocks and slightly different between the Torrcoal feedstocks, 0.98 and 0.95 for Torrcoal white and Torrcoal black, respectively."Both feedstocks' results are mostly in agreement with literature.Several authors gasified wood that was torrefied at conditions relevant to Topell black .However, even though the effect of torrefaction on the H2 and CH4 contents is the same, contradictions exist for the CO and CO2 contents.These differences for the CO and CO2 behaviors exist due to the different gasification conditions.For example, Berrueco et al. who performed experiments with the most relevant conditions compared to this study, reported the same effect like us in CO, H2 and CH4 contents, but not for the CO2 content.This reduction in the CO2 content in our study may be due to a higher activity of the Boudouard reaction with torrefied feedstocks because of the higher availability of carbon in the torrefied feedstock.In addition, the lower volatile matter content of the torrefied biomass is expected to result in a lower primary tars formation.The latter would permit a lower steam demand for reforming of the hydrocarbons and, thus, a higher steam availability for the water-gas-shift and char gasification reactions.The variability in the ER and SBR values in the Topell black experiments resulted in changes in the H2 and CO2 contents, as expected.Increasing the SBR and decreasing the ER resulted in increasing the H2 content in the product gas.On the other hand, the CO content remained the same.The latter may be attributed to the WGS reaction which worked as a stabilizing factor, if SBR increased and ER decreased, part of produced CO may react with the extra steam to produce H2 and CO2.In addition, Topell black and Torrcoal black have been gasified using the same ER and SBR values.The limited differences in product gas composition are attributed to differences in wood origins and in torrefaction conditions.Based on the μ-GC analysis of the product gas, torrefaction generally resulted in a reduced BTX content.According to Yu et al. , who studied tar formation of all three individual biomass components, i.e. 
cellulose, hemicellulose and lignin, BTX originates primarily from hemicellulose and cellulose, and secondarily from lignin. As torrefaction leads to a decrease in the hemicellulose content, a reduction in BTX was to be expected. Moreover, the reduction is larger for Torrcoal black, which was torrefied at a higher temperature than Topell black, indicating a larger decrease in the hemicellulose content for Torrcoal black. The most affected BTX species is benzene for both feedstocks. Torrefaction resulted in a reduction of the total tar content in the product gas for both feedstocks. For each tar compound, Torrcoal white resulted in higher concentrations than Topell white, although under different gasification conditions. Torrefaction resulted in a larger reduction of the total tar content for Torrcoal black. Moreover, all tar compound concentrations decreased in the Torrcoal black experiments, while for Topell black all tar compounds lighter than ethylbenzene decreased. This reduction in the total tar content during fluidized bed gasification due to torrefaction has been reported before in the literature. As torrefaction decreases the volatile matter content of the feedstock, a lower amount of primary tars is released in the devolatilization step in the gasifier. As a consequence, a lower amount of secondary and tertiary tars may be expected as well. Therefore, a more severe torrefaction, as in the case of Torrcoal black, will lead to a larger reduction in volatile matter content and, therefore, to less tar formation. Since the total tar content was affected by torrefaction, the individual tar classes were affected as well. For Topell black, Class 3 and Class 4 tars decreased by 37% and 26%, respectively. Class 5 tars showed a slight, but not significant, increase from 0.14 to 0.18 g.Nm−3. The total tar concentration reduction and the total tar yield reduction were approximately 30% and 40%, respectively. Class 3 tars decreased mainly due to a decrease in toluene. The decrease in Classes 3, 4 and 5 was much larger for Torrcoal black; it was approximately 50%, 61% and 82%, respectively. Class 3 tars decreased from 3.0 to 1.5 g.Nm−3, Class 4 tars decreased from 3.2 to 1.2 g.Nm−3 and Class 5 tars decreased from 0.5 to 0.1 g.Nm−3. This large reduction in the total tar content and total tar yield derived from the reduction of toluene and naphthalene, which decreased by more than 40%. Lastly, Class 2 tars were completely converted. This decrease in phenol content was not expected, as Torrcoal black is expected to contain more lignin than Torrcoal white. However, it can be explained, as it has been reported before that the presence of H2 in the product gas significantly enhances the hydrodeoxygenation of oxygenated tar compounds. The simultaneous increase in ER and decrease in SBR resulted, for Topell black, in no significant changes in total tar concentration and yield. However, small changes did occur in almost all individual tar compounds. The combined increase in ER and decrease in SBR resulted in conversion of the phenol. The latter is among the reasons why the relative fraction of Class 3 tars increased, whereas the relative fraction of Class 4 tars decreased. Based on mass balance calculations, various process key performance indicators were calculated, such as the CCE, the CGE, the H2/CO molar ratio and the gas yield. For the Torrcoal samples, torrefaction resulted in a decrease in CCE, while the opposite was observed for the Topell samples. While the former was expected as a result of the lower volatile matter content, the latter
was not. As described before, the feeding system consisted of a screw feeder which also grinds the biomass pellets during operation. By disconnecting the feeder from the gasifier and collecting and analysing the material downstream of the feeding screw, it was found that the average particle size of Topell black was significantly smaller than that of Topell white. Apparently, this was due to more severe grinding caused by the larger diameter of the Topell black pellets in combination with the increased brittleness resulting from the torrefaction. Siedlecki and de Jong have reported that a smaller particle size will lead to a higher burnout rate and tar yield, and this was indeed observed here. Related to the increased CCE and CGE, the LHV of the gas increased as well. In addition, it was checked whether the deviating result for the Topell samples could be explained by sub-optimal recirculation conditions or by the high absolute pressure in the riser during the Topell white experiments; the differential pressure measurements of the reactor were inspected, but this was not the case. For the Torrcoal samples, the particle size distribution after the feeder was not determined. Because the more severe torrefaction conditions lead to an even further increased brittleness, one might expect even smaller particle sizes for Torrcoal black than for Topell black. However, due to the smaller pellet size, the grinding effect of the feeder was probably much smaller. For the Torrcoal samples, torrefaction led to a decrease in CCE, but the CGE remained the same. The latter can be attributed to the increase of the H2 and CO contents. Finally, for both feedstocks torrefaction resulted in an increase in the gas yield, as reported before. Torrefaction, when combined with a densification step, offers benefits in logistics and handling operations. Therefore, in this study, steam-oxygen blown circulating fluidized bed gasification experiments at 850 °C were performed with commercial torrefied woods and their parent materials in order to investigate the impact of torrefaction under our conditions. The examined operational conditions were relevant to typical operating conditions in practical applications. It is concluded that torrefaction affected the gasification performance of both woody feedstocks in the same way with respect to the permanent gas composition, gas yield and total tar content, but in different ways regarding the CCE and CGE. Torrefaction resulted in an increased gas quality, as it yielded higher H2 and CO contents, a decrease of the CO2 content, and a significant decrease of the total tar content. For Topell black, the decrease in the tar content concerned Class 3 and 4 tars, whereas for Torrcoal black this decrease was larger and concerned all tar classes. Moreover, in both cases torrefaction resulted in an increased gas yield in the gasifier. For the Torrcoal samples, torrefaction resulted in a decrease in CCE, as expected based on the decrease in volatile matter content. The CGE remained approximately constant due to an increase in H2 and CO content in the product gas. The Topell samples showed an increase in CCE and CGE upon torrefaction, which could be attributed to significant grinding in the screw feeder. In addition to the benefits of torrefaction in logistics and handling, it is generally concluded that both torrefied fuels may offer benefits as a feedstock for steam-oxygen blown circulating fluidized bed gasification, in particular in terms of gas quality and yield.
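To make the key performance indicators discussed above easier to interpret, the minimal sketch below shows how the torrefaction degree, the modified steam-to-biomass ratio SBR*, the CCE and the CGE are commonly computed from fuel and gas data. The torrefaction degree and SBR* follow the verbal descriptions given in the text; the CCE and CGE use standard textbook definitions, since the paper does not spell out its exact mass balance formulas. All function names, species lists and numerical inputs are illustrative assumptions, not the authors' actual code or measured values.

```python
# Illustrative gasification KPI calculations; all inputs are placeholders.
M_C = 12.011   # kg/kmol, molar mass of carbon
V_M = 22.414   # Nm3/kmol, molar gas volume at normal conditions

def torrefaction_degree(vm_raw, vm_torr):
    """Reduction of volatile matter upon torrefaction divided by the
    initial volatile matter content, both on a dry basis."""
    return (vm_raw - vm_torr) / vm_raw

def modified_sbr(steam_flow, fuel_flow_wet, moisture_frac):
    """SBR*: total water (steam fed + fuel moisture) per unit of dry fuel."""
    total_water = steam_flow + fuel_flow_wet * moisture_frac
    dry_fuel = fuel_flow_wet * (1.0 - moisture_frac)
    return total_water / dry_fuel

def cce(gas_yield, y_gas, carbon_frac_daf):
    """Carbon conversion efficiency: carbon leaving with the dry, N2-free
    product gas (CO, CO2, CH4, C6H6) per kg of carbon fed with the daf fuel.
    gas_yield is in Nm3 of dry, N2-free gas per kg of daf fuel."""
    carbon_atoms = {"CO": 1, "CO2": 1, "CH4": 1, "C6H6": 6}
    kmol_gas = gas_yield / V_M
    kg_carbon_in_gas = kmol_gas * M_C * sum(
        y_gas.get(s, 0.0) * n for s, n in carbon_atoms.items())
    return kg_carbon_in_gas / carbon_frac_daf

def cge(gas_yield, y_gas, lhv_species, lhv_fuel):
    """Cold gas efficiency: chemical energy in the cold product gas divided
    by the energy fed with the fuel, both on an LHV basis."""
    lhv_gas = sum(y_gas.get(s, 0.0) * lhv for s, lhv in lhv_species.items())  # MJ/Nm3
    return gas_yield * lhv_gas / lhv_fuel

# Example with made-up values (per kg of dry, ash-free fuel).
y = {"H2": 0.30, "CO": 0.25, "CO2": 0.30, "CH4": 0.08, "C6H6": 0.01}  # dry, N2-free fractions
lhv = {"H2": 10.8, "CO": 12.6, "CH4": 35.8, "C6H6": 140.0}            # MJ/Nm3, approximate

print("Torrefaction degree:", round(torrefaction_degree(0.80, 0.65), 3))
print("SBR*:", round(modified_sbr(steam_flow=0.9, fuel_flow_wet=1.0, moisture_frac=0.05), 2))
print("CCE:", round(cce(gas_yield=1.2, y_gas=y, carbon_frac_daf=0.52), 2))
print("CGE:", round(cge(gas_yield=1.2, y_gas=y, lhv_species=lhv, lhv_fuel=19.5), 2))
```

Expressed this way, it is also easy to see why the two efficiencies can move in opposite directions: the CCE depends only on how much fuel carbon ends up in the gas, whereas the CGE is weighted by the heating values of the combustible species, so a gas richer in H2 and CO can hold the CGE constant even when the CCE drops.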
Torrefaction is a promising biomass upgrading technology, as it makes biomass more coal-like and offers benefits in logistics and handling operations. Gasification is an attractive thermochemical conversion technology due to the flexibility in product gas end-uses. It is therefore valuable to investigate whether additional benefits arise when torrefaction is coupled with gasification. To this end, two commercial torrefied wood fuels and their parent materials were gasified at 800–850 °C under atmospheric steam-oxygen circulating fluidized bed gasification conditions with magnesite as bed material. The torrefied feedstocks consisted of wood residues torrefied by Topell at 250 °C (Topell black), and mixed wood and wood residues torrefied by Torrcoal at 300 °C (Torrcoal black). The gasification results show that torrefaction resulted in an increased gas quality, as it yielded higher H2 and CO contents, a decrease of the CO2 content, an increased gas yield and a significant decrease of the total tar content for both feedstocks. For the Torrcoal samples, torrefaction resulted in a decrease in the carbon conversion efficiency (CCE), while the cold gas efficiency (CGE) remained approximately the same due to the increase in the H2 and CO contents. The Topell samples showed an increase in the CCE and CGE upon torrefaction, but this could be attributed to significant grinding in the screw feeder. It is generally concluded that both torrefied fuels may offer benefits as a feedstock for steam-oxygen blown circulating fluidized bed gasification, in particular in terms of gas quality and yield.
466
Donald Trump's grammar of persuasion in his speech
This paper is an investigation into the nature of propositions made in President Trump's speech on Jerusalem delivered on December 6, 2017, from the perspective of Systemic Functional Linguistics.The speech was a persuasive yet controversial one.It was persuasive since he used it to persuade the audience to agree with his decision.More specifically, this speech can be categorized as an analytical exposition whose purpose is to persuade that something is the case."The speech was also controversial because it had received a positive response from the President's supporters but got adverse reactions from his rivals which strengthened tensions across the Middle East.Because of its persuasive and controversial values, the speech is worth analyzing.The primary purpose of a persuasive speech is to get the audience convinced and persuaded about the subject matter of the speech."The grammatical analysis on a persuasive speech is worthwhile to do because one's techniques of persuading the audience can be seen from the grammatical choices the person uses, as grammar is vital in persuasion.The grammatical viewpoint was chosen to analyze the rhetorical device of ethos, pathos, and logos because most of the previous studies concerned with the discussion of the power words, pragmatics, social, even psychological aspects of the devices."This current research carries out a lexico-grammatical analysis on the main clauses in President Trump's persuasive yet controversial speech with the aim of identifying how interpersonal relationships are created between the speaker and the audience and how the systems of mood are used to build the ethos, pathos, and logos.Wardhaugh explained that “a change of topic requires a change in the language used” which implies that different purposes of speech will require a different linguistic strategy to apply.For example, to have someone to take medicine will likely need different linguistic strategies than to have the same person to have ice cream.It would be interesting, therefore, to analyze how the President built his ethos, pathos, and logos clauses in his political speech.The three elements are essential for persuading the audience on a given topic in a persuasive speech.One of the factors contributing to the successful building of the elements lies in the mood system being used.It means that the right mood system to apply will affect the persuasiveness of the message the sender intends to deliver.The result of the analysis is to give a comprehensive view of how ethos, pathos, and logos were built structurally in a persuasive, yet provocative speech.The words ethos, pathos, and logos come from the Greek language."Ethos is used by a speaker to convince the audience about the speaker's credibility or character.Ethos means “character”.By the ethos, a speaker demonstrates that he is a trustworthy source of information and therefore should be listened.To build an ethos, a speaker can choose appropriate language for the audience.It can be done by making him/her seem or sound rational or fair, showing his/her capability or expertise, and using correct grammar and syntax.The following sentence is an example of ethos, “My three decades of experience in public service, my tireless commitment to the people of this community, and my willingness to reach across the aisle and cooperate with the opposition, make me the ideal candidate for your mayor”.Pathos means “suffering” and “experience”, and is also known as the emotional appeal.It is generally used by a speaker to persuade an 
audience by appealing to the emotions or sentiments of the audience.Pathos is used to raise empathy from an audience, making the audience feel what the speaker would like them to feel."In short, pathos deals with the appeal to the audience's emotions which, according to Plutchik, consist of 8 types of emotions: fear, anger, sadness, joy, disgust, surprise, trust, and anticipation.Usually, pathos is created by a speaker by arousing a pity as well as irritation from an audience; perhaps to speed action.A speaker can develop pathos by using meaningful language, emotive tone, emotion arousing instances, stories of sensitive events, and indirect meanings.The following is an example of pathos: “Peace of mind is the most important one.Our innovative security systems will secure the comfort of your family so that you can sleep peacefully in the night.,Logos is the appeal to logic and is used for persuading an audience by using logic or reasons.Logos can be formed by quoting facts and statistics, historical and literal analogies, and citing convinced authorities on a subject.Logos can be developed by using advanced, theoretical or abstract language, citing facts, using historical and literal analogies, and constructing logical arguments.The following is an example of logos: “The data is completely perfect: this venture has reliably turned a profit year-over-year, even despite market drops in other regions.,Previous studies on ethos, pathos, and logos suggest that the rhetorical elements played a critical role in the success of public speaking."Fengjie et al., for example, revealed several factors that supported the success of Obama's speech, one of which was various rhetorical devices in his speeches. "They identified seven rhetorical elements applied in Obama's speech that reflect ethos, pathos, and logos propositions: alliteration, simile, metaphor, metonymy, synecdoche, antithesis, and parallelism. "Ko found out that ethos, pathos, and logos were widely used in Taiwanese President Ma's political discourse on the cross-Straits Economic Cooperation Framework Agreement. "One of the interesting results of the study showed that Ma's pathos is abundantly filled with the negative elements of fear and anger, and positive elements of hope and security, especially in the question-and-answer session.However, like the works of Fengjie et al. 
and Ko, most of the linguistic research focuses on the semantic rather than the grammatical level, although grammar is also a determinant factor in persuasion. Because of its importance, analyzing the grammar of a public speech is worthwhile. One of the prominent tools for analyzing grammar is Hallidayan SFL because it deals with interpersonal meaning, or the mood system, that is, how language is used in relation to other people. The concept of Systemic Functional Linguistics was first introduced by Halliday in the 1960s in the United Kingdom, and later developed in Australia. It was conceived as a grammar model that sees language as a set of semantic choices, which means people use language choices to produce meanings. The choice of different words and other syntactic or grammatical features will also produce different meanings. One of the metafunctions in SFL is the interpersonal metafunction. It is related to the social world, especially the relationship between the speaker and the listener. The interpersonal metafunction regards clauses as exchanges. It can be described by explaining the semantics of interaction and the metalanguage that correlates with language as interaction and modality. In this regard, this current study suggests that speakers/writers must be able to use language in such a way as to position themselves before their audience/readers. Hence, this analysis focuses on the interpersonal meaning implied in a speech to see how the speaker uses his speech to persuade the audience, because persuasion is closely related to the relationship between the speaker/writer and the audience/readers. The mood system has two basic terms: imperative and indicative. The indicative clause is related to the exchange of information, while the imperative clause relates to the performance of an action to provide services or to exchange goods. The indicative mood is divided into two: declarative and interrogative. Although both declarative and interrogative contain elements of tense, person, and number, they have syntactically and semantically different forms. Declaratives have the typical speech-function realization as statements that serve to provide information, while interrogatives are the mood of the question that serves to request information. The imperative mood is the mood of the verb and “the principal mood of will and desire”. This mood is characterized by a verbal group in the base form of a verb. Imperatives have the typical speech-function realization as orders, requests, and directives. The imperative mood does not occur in subordinate clauses or subordinate questions because, basically, this kind of mood is performative. In the semantics of interaction, clauses are used to analyze how language is used to connect with others, negotiate relationships, and express opinions and attitudes. According to Halliday, a relationship between speakers is established whenever language is used to connect with other people. Halliday further explained that there are two basic types of speech roles: giving and demanding. Giving means inviting to accept, for example, 'Do you want to have this book?' On the other hand, demanding means inviting to give, for example, 'Can I have the book?' In the case of commodity exchange, information and goods-&-services are the two types of commodities exchanged. Each type of mood includes different constituent structures. In this case, the complete English clause has several functional elements, namely Subject, Finite, Predicator, Complement, and Adjunct. The type of mood of the
clause is determined by the subject and finite position in a clause, while the clause residue is filled by a combination of Predicator, Complement, and Adjunct.Systemic Functional Linguistics is commonly used as the approach to analyze the functional meaning of a language.Many researchers had applied SFL from different dimensions and perspectives.Kamalu and Tamunobelema used SFL to analyze religious identities and ideologies construed in a literary text.They found out that SFL Mood analysis was useful to understand the structural based interpersonal relationships of the participants in the literary text.Ayoola analyzed some political adverts of two parties in Nigeria concerning the interpersonal metafunction.One of his highlighted findings is that the interpersonal meaning of a structure does not always correspond with its lexicogrammar analysis.The writers used different mood types to interact, negotiate, and establish their relationship with the readers.The mood system was also applied to change the readers’ behavior.Ayoola concluded that contextual factors profoundly influenced the mood types used in the adverts as well as their interpersonal meanings.This current research was a discourse analysis applying a qualitative approach to see how President Trump grammatically composed his ethos, pathos, and logos."The data source was taken from www.whitehouse.gov.In his speech, 74 sentences consisted of 71 major clauses and eight minor clauses.In this research, each simple sentence or complex sentence was counted as one clause.One compound sentence consisting of two major clauses was calculated as two clauses, depending on the number of main clauses that construct the sentence.The clauses were then classified into each element of persuasion.As the data, one clause belonged to ethos element; 50 clauses dealt with pathos, and 20 clauses referred to logos.The analysis was conducted by exploring the types of mood and speech functions as well as the mood elements that supported the elements of persuasion.The mood type and speech-function realization of his ethos clause can be described in Table 1.President Trump only used one clause to build credibility and was delivered in the declarative mood.The declarative mood functioned as a statement of fact and was composed by presenting a personal experience.It indicates that his main concern was to make a statement by providing convincing information about his credibility in addressing the topic of the speech.In this instance, he presented himself as a person with clear and unbiased thinking.The aim was to make the readers believe that the decision he would make through his speech would be true and reasonable.The subject of the clause was a personal specific subject in the form of the personal pronoun “I”.He used “I” to tell the audience that it is “he”, not the other else, who did something which implies his credibility."Through the ethos clause, he would like to say that it is he who made the promise to look at the world's challenges with open eyes and very fresh thinking.In communicating his self-credibility, the finite element of the clause used positive polarity.Such polarity gives a positive validity of the proposition and helps him create positive nuance to the audience about him."The positive polarity would make the audience directly understand that 'yes, he did it'.In building the ethos clause, President Trump made use of an adjunct as the theme of the clause, putting them in the front part of the clause.He began the clause with the 
circumstantial adjunct “When I came into office”, which directly orients the audience to the very first time he became the President of the US. The other circumstantial adjunct reflects the core of the ethos that he built. The phrase “with open eyes and very fresh thinking” convinces the audience that he does everything objectively and without bias, including his decision to move the US embassy to Jerusalem. In appealing to the audience's emotions, President Trump used more clauses, which were dominated by declarative moods and, on a few occasions, imperative ones. The speech-function realization of the declarative moods varies across clauses. However, the imperative mood was used in line with its typical function, that is, a direct request. The description of mood type and speech-function realization of his pathos clauses can be seen in Table 2. In his pathos clauses, President Trump employed two kinds of mood: declarative and imperative. In these clauses, the declarative moods were dominantly used to function as statements. However, some of them were applied differently to make an indirect request to the audience as in. Another mood, the imperative, was used to make a direct request to the audience as in. The declarative moods indicating pathos clauses functioned differently. At least four speech-function realizations of his declarative moods were identified. Firstly, they functioned as statements of opinion which commonly pointed to his belief or judgment about something or someone. The technique employed was presenting an evaluative opinion as in. In this instance, he used a negative declarative mood to evaluate the previous assumptions and strategies made by the previous presidents, which, according to him, had totally failed to solve the problem. Besides, the phrases ‘failed assumptions’ and ‘failed strategies’ in clearly indicated his negative evaluative opinion on the previous US governments' policies. Similar clauses criticizing or blaming previous US presidents' policy on the Israel-Palestinian conflict also existed. It seems that the criticisms were utilized as the entrance door to pose the 'new' method he would like to deliver in solving the Israel-Palestinian conflict.
"Secondly, the declarative mood in President Trump's speech functioned as a statement of an assertion.Through the clauses, he would like to show the audience that his decision is right and would result in a good impact.There were four techniques used: Affirming the decision, Asserting a commitment, Asserting the expectation, and Predicting an impact.The first technique, affirming the decision, can be seen in.In this instance, President Trump assured the audience that relocating the embassy to Jerusalem was the right decision."He touched the audience's emotion by using the modality 'has to' which indicates something imperative to do.In his speech, the main objectives of affirmation were making assertions that ‘It is the right decision’ and ‘It is the best time to do it’.The second technique is asserting a commitment to solving the problem as exemplified in.In this instance, he communicated to the audience that his government committed to supporting the two-state solution as long as the two parties agreed."This clause was clearly used to arouse the audience's positive attitude toward the US's commitment to solving the Israel-Palestinian conflict.The third technique is asserting an expectation as in.Through this technique, an emotion of anticipation was built.In this instance, President Trump presented his hope regarding the relocation of the US embassy to Jerusalem.In his opinion, such kind of decision would result in peace.The fourth technique is predicting an impact as in."This technique is utilized to answer the possible audience's question of ‘what would be the effect of the decision?’",In this instance, he predicted that the decision would soon invite a dissenting opinion from some parties."This clause would satisfy the audience's question of whether President Trump had anticipated the impact or not, specifically the negative one.Thirdly, some of the declarative moods function as a statement of inclination as exemplified in."In this instance, President Trump's assertion “I intend to do everything in my power” was used to convince the audience that he would do his best in solving the problem. "This clause is a strong personal inclination that may arouse the audience's positive feeling that he would be able to achieve his objective because he would use any resources to do it. "Fourthly, lexicogrammatically, President Trump's declarative mood, however, was not always aimed at giving information to the audience but also requesting someone to do something implicitly. 
"In some parts of his speech, he sought for audience's agreement to do the great thing.The clause indicates his request to the leaders of the regions to join him in the noble quest for lasting peace.In other words, he asked the audience to agree with him that expelling the extremists from their midst was the best way to create peace in the Middle East.Besides the declarative moods, three clauses were delivered in imperative moods to make a direct request to the audience to do something as can be seen in.The technique applied was a request with ‘let us’.The choice of using the inclusive ‘us’ was intended to make the audience feel involved and become closer to the speaker."In the case of the elements of moods employed for touching the audience's emotion, three pronouns were frequently used as the subject of the clause.President Trump used the inclusive ‘we’ 2 times and exclusive ‘we’ 4 times.The inclusive ‘we’ was used to involve the audience in the same problem as in “We cannot solve our problems …”."In this instance, “we” refer to the audience and President Trump's government indicating that the problem faced was his as well as the audience's problem.On the other hand, the exclusive ‘we’ was used to refer to the US government only as in “We are not …borders”."Here, the exclusive ‘we’ was utilized to assert to the audience about his government's position on the problem.Another interesting use of the pronoun in building pathos clauses is the impersonal ‘it’ as the subject of the clause, which was employed three times consecutively.It was used to emphasize the importance of the present time to do something.For example, in “it is time …midst”, President Trump told the audience that “it is the best moment” for everyone who wants peace to expel the extremists.President Trump utilized mostly positive polarity in building pathos, but some of the clauses used negative ones.He commonly used positive polarity, especially when talking about the purpose of his decision, Israel, and the Middle East.Meanwhile, he also frequently used negative polarity when talking about previous US presidents.He not only used ‘not’ but also negative-sensed words to indicate negative polarity as in “While previous presidents …failed to deliver”."The word ‘failed' clearly indicates a negative sense because, in this instance, it refers to the unsuccessfulness of the previous presidents in solving the problem.The tense in his pathos clauses varied from past, present, and future times."The past tense was utilized commonly to present emotional-touching facts about the previous US presidents' failure in recognizing Jerusalem as the capital of Israel.The simple present was used to present his decision and his inclination as in ‘we want an agreement that is a great deal for the Israelis and a great deal for the Palestinians”.The future time was used to communicate the emotion-touching impacts of his decision as in “But we are …cooperation”.In building pathos, President Trump began his clauses with adjuncts and subjects.He used the adjunct to precede the clauses for two main reasons: for cohesiveness and for emphasizing to the audience about the facts represented by the adjuncts.For example, in “While previous presidents …deliver”.The comment adjunct tells the audience that the previous presidents could only promise which implies they did not have a strong commitment to doing their best.This sentence was clearly used to arouse the negative sentiment of the audience toward the previous presidents/governments."The element of 
complement also plays a vital role in constructing pathos in which the important words, either in the form of noun phrases or adjectives, convey important messages that may arouse the audience's emotional reaction.For example, in “Our children … our conflicts”, the noun phrase ‘our love, not our conflicts’ gives an emotional touch."The use of contradictory words ‘love’ and conflicts' will easily arouse the audience's emotional reaction.President Trump seemed to assert to the audience indirectly that what he did was something related to ‘love’ not something that may inflict a ‘conflict’."The same thing occurs in “Let us rethink …” where the complements contain the power words to arouse the audience's emotional reaction.The phrase ‘old assumptions’ were used to degrade the policies undertaken by the previous governments as merely old assumptions and, therefore, should be forgotten.Another phrase, ‘our hearts and minds’, was clearly used to assert to the audience to use their hearts and minds in taking action.This phrase also implies that the previous presidents or governments did not use both of them to determine their policies which resulted in adverse impacts."In appealing to the audience's logic, President Trump used some clauses delivered in the declarative mood which functioned as a statement of fact.The mood type and speech-function realization of his logos clauses are described in Table 3.In convincing the audience of the importance of recognizing Jerusalem as the capital city of Israel, he posed many logical reasoning and facts."The clauses were mainly delivered in the declarative mood to give the audience some information and claim some factual facts about the Jerusalem Embassy Act, the law's waiver, and Jerusalem.As can be seen in Table 3, four techniques were applied: Presenting a precedent, presenting details of the precedents, presenting the third person opinion, and presenting a present fact.The first technique was presenting a precedent that was one of the important aspects of appealing to logos because it is commonly used as a justification of an argumentation.The precedents or some referential events in the past were posed by President Trump to support his deductive reasoning.For example, in, he posed a similar event in the past as the basis for his argumentation that Jerusalem should be recognized as the capital city of Israel.In other words, he would like to say that what he decided at that time was actually a manifestation of what had been decided by congress more than 20 years ago; therefore he had done the right thing on it.The second technique is by presenting details of the precedent as in.In this instance, President Trump gave further information about the Jerusalem Embassy Act as a precedent mentioned earlier.He informed that the Act passed Congress by an overwhelming bipartisan majority.He used the clause – and the next one – to explain why the Act should be adopted by his government.The third technique is presenting the third person opinion as in.Here, President Trump informed the audience of what the others say about the previous government.This clause was to support his argument that the previous presidents had failed to bring about peace.By presenting this clause, President Trump seemed objective about his judgment because the others agreed with him and had the same opinion.The fourth technique is stating a present fact as in.In this instance, President Trump provided the audience with information about Jerusalem.He used the clause to support the decision he 
made.Because Jerusalem is the seat of the modern Israeli government, it is reasonable now to recognize Jerusalem as the capital city of Israel.The same clauses relating to this matter are clauses number 21, 26, 27, and 28.All the clauses talk about the present fact of Jerusalem.In terms of the elements of mood, both personal and impersonal subjects were applied such as Israel, Jerusalem, US presidents, and congress.The word ‘Israel’ was used as the subject of the clause commonly to assert to the audience that Israel has the right to appoint Jerusalem as its capital city as in “…, Israel has made its capital in the city of Jerusalem”."Interestingly, he used ‘previous US presidents’ as the subject of the clause to confirm the audience that they had made a fatal mistake that is by implementing the Jerusalem Embassy Act instead of adhering to law's waiver, as in “Yet, for over 20 years, every previous American president has exercised …”.In presenting logos to the audience, President Trump applied both positive and negative polarity in the finite.In talking about Israel and Jerusalem he commonly used positive polarity, while in talking about previous US presidents he often used negative polarity.The negative polarity was represented not only by ‘not’ but also by negative-sensed words as in.Interestingly, when talking about Israel and Jerusalem, he mostly used simple present which means the propositions are valid in the present time.In other words, he would like to say that anything about Israel and Jerusalem is the current fact as in “Today, …government”.President Trump commonly began his logos clauses with the elements of adjuncts and subjects.The high use of adjuncts in preceding the clauses is for two main reasons.Firstly, he would like to connect a clause with another clause for cohesiveness, using conjunctive adjuncts such as ‘but’, ‘nevertheless’, and ‘yet’.Secondly, he would like to emphasize to the audience about the facts represented by the adjuncts."For example, in clause 21, the clause-like adjunct “It was 70 years ago that” represented the fact of US's recognition of the State of Israel.In connection with the results of the previous researches outlined in the literature review, this study identifies three aspects of SFL in the speech that are interesting to discuss: the high use of declarative moods, various speech-function realizations, and negative elements in the clauses.Firstly, declarative moods strongly dominated the speech."Unlike Ayoola's research which showed many variations in the types of moods in political advertisements, the type of mood in President Trump's speech tended to be monotonous in which the declarative mood dominated the clauses.This finding indicates that in delivering his decision in the speech, the President positioned himself mostly as a carrier of information to his audience rather than as a requester of information.The use of declarative mood helps him deliver his message directly without making a distance between him and the audience, as the nature of a declarative mood is to make a statement."It is unlike the imperative or interrogative mood, for example, which tends to make a distance between the speaker and the hearer since they require the presence of the audience's responses to seeing whether the proposition is successful or not.Hence, by applying the declarative mood, the message itself can be received instantly without requiring further thought and time from the side of the hearers.The President would like to assert that “It is the fact” or “it 
is true” which made the audience had no chance to challenge the information.The information provided by President Trump through the declarative moods varied throughout the rhetorical elements of ethos, pathos, and logos.In the ethos clause, he gave information to the audience that his decision was true and unbiased."In his pathos clauses, he generally used the declarative moods to touch the audience's emotions, arousing both the audience's positive and negative feelings.In the logos clauses, he presented some facts underlying his decision.Secondly, instead of functioning as statements, some declarative moods were used to make indirect requests to the audience which is one of the non-typical functions of a declarative mood, as exemplified in."This is in line with the result of Ayoola's research stating that the mood types in political discourse are not always in accordance with their typical speech functions. "This finding is also following Wardhaugh's statement that a change of topic requires a change in the language used.The mood types and, especially, the speech-function realizations also varied by the types of clauses."In President Trump's ethos clause, the mood type used was declarative which functioned as a statement of fact.In his pathos clauses, two types of mood were applied: declarative and imperative.The declarative moods had got different speech function realizations: statement of opinion, statement of assertion, statement of inclination, and indirect request.The imperative mood, however, functioned in line with its typical speech function, which is as a direct request.In his logos clauses, one type of mood was applied and functioned as a statement of fact."Thirdly, one of the characteristics of President Trump's speech is the presence of many negative elements in the speech clauses, especially in the pathos and logos. "His speech was like Ma's in that many negative elements were present in the speech. "The negative elements were the element that was not found in Obama's speeches, as revealed in the research conducted by Fengjie et al. "The negative element in President Trump's speech was in the form of blaming, especially of the former US presidents.He used many clauses containing negative evaluations as the tool for persuading the audience.In this speech, blaming of the previous US presidents or governments were built either in the pathos or logos clauses, especially by raising fear and disgust in the audience toward the previous US presidents and government.According to Mulholland, blaming is one of the tools for persuading the audience; because we tend to be polite and agree with the blamer."When something goes wrong, we often think that it is obviously the other person's mistake and that he/she should be blamed for the outcome.The negative element was also reflected through the finite of the clauses.President Trump used mostly positive polarity in building pathos or logos clauses, but in some clauses, he used the negative ones.When talking about the purpose of his decision, Israel, and the Middle East, he commonly used positive polarity; but when talking about previous US presidents, he frequently used negative polarity.He not only used ‘not’ to indicate negative polarity but also negative-sensed words as in “While previous presidents .. 
failed to deliver”.The word ‘failed’ in this clause clearly reflects a negative sense because it refers to the unsuccessfulness of the previous presidents in solving the problem."From the results of this current research, it can be concluded that the controversial side of President Trump's persuasive speech was not only because of its content which was actually controversial but was also because of choices of its mood types and speech function realizations.His many uses of clauses indicating a negative criticism could ignite disagreement even unrest in the opponent side.Being negatively criticized or blamed may cause fear, shame, or anger, and feeling worthless or incompetent.Therefore, the supporters of former presidents potentially got angry and shameful because they felt blamed.That is why the reaction from the opposite side also tended to be negative towards President Trump.Additionally, Greenberg stated that blaming can be a way of asserting power and social control.This is well reflected in many clauses built for blaming others.In these clauses, President Trump presented to the audience that he as well as his government was very powerful and well-controlled the situation.The results of this current research imply that public political speeches that contain many clauses of negative criticism toward political opponents have the potential to be a controversial speech."For the speaker's supporters, negative criticism towards the opposite side will make them more convinced that the policy being made was right and had to be met.For the opponent side, however, criticism will only make them increasingly dislike the speaker."President Trump's speech of Jerusalem can be categorized as a persuasive speech whose primary purpose is to get the audience convinced and persuaded about the subject matter of the speech.His attempts to influence the audience can be seen from the grammatical choices he made.From the results of the study, it can be concluded that the ethos clause was built by employing the declarative mood functioning as a statement to show his personal credibility; the pathos clauses were composed by implementing two moods: mostly declarative, which mainly functioned as statements, and few imperative moods to arouse both positive and negative feeling of the audience; and the logos clauses were composed by using the declarative moods functioning as statements to give bases for his argumentation.The high use of declarative moods indicated that he positioned himself as an information bearer, to shorten the gap between him and his audience.In the grammatical perspective, the controversial side of the speech is mostly caused by the presence of many clauses containing negative elements.Besides, the negative polarity of the clauses is evident in the speech, primarily when President Trump talked about previous US presidents and governments.A. Fanani: Conceived and designed the analysis; Analyzed and interpreted the data; Wrote the paper.S. Setiawan: Conceived and designed the analysis.O. Purwati: Analyzed and interpreted the data.M. Maisarah: Contributed reagents, materials, analysis tools or data.U. Qoyyimah: Analyzed and interpreted the data; Wrote the paper.This work was supported by LPDP, Ministry of Finance, Republic Indonesia .The authors declare no conflict of interest.No additional information is available for this paper.
This article presents an analysis of the nature of the propositions made in President Trump's persuasive yet controversial speech on Jerusalem from the perspective of mood analysis. The interpersonal relationships between the speaker and the audience in the building of ethos, pathos, and logos are revealed. It applies discourse analysis with a qualitative approach to examine how the President grammatically composed his ethos, pathos, and logos clauses. The results show that in the speech: 1) the ethos clause was built by employing the declarative mood functioning as a statement to show his credibility; 2) the pathos clauses were composed by implementing two moods: mostly declaratives, which mainly functioned as statements, and a few imperative moods to arouse both positive and negative feelings in the audience; and 3) the logos clauses were composed by using declarative moods functioning as statements to give bases for his argumentation. The high use of declarative moods indicated that he positioned himself as an information bearer, shortening the gap between him and his audience. Grammatically, the controversial side of the speech was mostly marked by clauses containing negative elements such as blaming and negative polarity, especially when talking about previous US presidents and governments. Linguistics; Ethos; Pathos; Logos; SFL mood analysis; Speech
467
Mapping the Gaps: Gender Differences in Preventive Cardiovascular Care among Managed Care Members in Four Metropolitan Areas
We examined 1 year of medical and pharmacy claim, laboratory results, and enrollment data from one national health plan for 78,529 commercial health plan members with DM and 27,918 with CAD drawn from a population of 1,029,346 members across four metropolitan areas.The project was approved by the RAND institutional review board.Age was measured in years.Race/ethnicity was categorized as Asian, non-Hispanic Black, Hispanic, non-Hispanic White, or other, which included those for whom race/ethnicity data were missing.Quality of care measures assess whether the care provided adheres to evidence based standards of care.Specifically, we examined two screening measures, two intermediate outcomes, and one combined outcome.The HbA1c measures were examined only for those with DM and the combined outcome was examined only for those with CAD.LDL screening and control measures were examined separately for both those with DM and those with CAD.We drew on NCQA HEDIS specifications to compute these measures."We used Optum's EBM Symmetry Connect software, a decision support and population health management software that scans medical and pharmacy claim, laboratory results, and enrollment data to identify members eligible for the selected EBM measures, and flags those who did and did not receive indicated care or meet indicated clinical threshold.We then calculated EBM scores or rates as the percentage of eligible members meeting the standard for care for a specific measure."We used the χ2 and Fisher's exact tests to compare men's and women's unadjusted performance scores.For each of the seven quality measures, we built sequential logistic regression models.The first set of models tested for a gender gap in quality of care, adjusting for age, race/ethnicity, and income.In the second set of models, we assessed whether variables commonly tracked by population health management tools and teams explain gender differences in quality of care.Variables added in model 2 included a member risk score for having gaps in EBMs, the log of total medical cost for each member, a health management prioritization score, and use rates.Finally, in model 3 we added a set of variables the plan used to determine whether a member had been identified as eligible for one or more disease management or behavioral health programs.We present this sequence of results related to each EBM.Women were younger than men among members with DM and those with CAD.Whereas men and women were almost equally represented among those with DM, among the members with CAD, men outnumbered women two to one."Among non-Hispanic Black members, women's representation was higher than men's for both conditions but only for CAD among Hispanic members.Men had higher representation in both disease groups among non-Hispanic White members.Race/ethnicity varied significantly by metropolitan area, with larger Hispanic populations in Houston and Southern California and larger Black populations in Atlanta and Houston.Members with CAD were more concentrated in the New York City/Northern New Jersey area.However, within each disease group, men and women were distributed similarly geographically.Gender differences varied considerably across quality measures.Among members with DM, gender differences were less than 2 percentage points for all but one of the four quality measures."Performance rates were similar for men and women for HbA1c screening and for HbA1c control, women's performance rate exceeded men's by 1 percentage point.For LDL screening among members with diabetes, 
there was a small but statistically significant difference favoring men."However, in the case of LDL control among members with diabetes, men's performance rate was 5 percentage points higher than women's.Unadjusted, gender gaps were larger among health plan members with CAD than those with DM, particularly for LDL control.The LDL screening rate for members with CAD was more than 2 percentage points higher for men.Although LDL control rates were much higher among members with CAD than DM, the overall gender gap was also larger.Taking statin use into account reduced the gender gap."For the combined measure of LDL control or statin use, men's performance rates exceeded women's rates by 8 percentage points.Unadjusted gender differences in performance rates also varied by metropolitan area, but the basic pattern of gender gaps in LDL control favoring men was similar to that for the overall plan population.Herein we focus on measures for which the overall gender gap was more than 2 percentage points.Among members with CAD, men experienced LDL screening rates that ranged from 0.4 to 4.9 percentage points higher across metropolitan areas."Among members with CAD, men's LDL control rates exceeded women's rates by 9.9 to 17.2 percentage points.The maps shown in Figures 3 and 4 track geographic variation in gender gaps in LDL screening and control among plan members with DM.In the case of LDL control among members with DM, performance rates were 3.5% to 6.0% higher for men in all four metropolitan areas.Sequential logistic regression models identified factors associated with not having received indicated care on each of the seven quality measures.Model 1 adjusted for demographic measures and metropolitan area.Women and men with DM did not differ significantly in odds of having had an HbA1c screening in the last 12 months, either for the overall population or in any of the 4 metropolitan areas.For HbA1c control, women were less likely than men to have a gap in indicated care.Women with DM faced greater odds than men of not having received an LDL test in the last 12 months.These results also vary by gender across the geographic areas.Atlanta had the greatest gender difference, with no significant difference observed in Houston or Southern California.Women with diabetes were more likely than men to experience a gap in LDL control.Results varied by gender across the markets: Atlanta had the largest gender difference and Southern California the smallest.Model 2 adjusted for additional case mix factors related to medical cost, risk, and use of services.After these adjustments, the gender difference in HbA1c testing becomes significant and favors men, the gender difference favoring women in HbA1c control becomes nonsignificant, and the differences favoring men for LDL testing and LDL control both increase.In our final diabetes management model, we adjusted for having been identified as eligible for chronic disease management.There is no meaningful change in the results, which indicates that the algorithms identifying and programs offered to and received by some plan members for their diabetes management do not compensate for the observed gender differences in care received.In model 1, men with CAD were more likely than women to have had an LDL test in the last 12 months.These results also vary geographically.The odds of a gap in care were higher for women in Southern California than New York City/New Jersey and there was no significant gender difference in either Atlanta or Houston.Women were also more 
likely than men to experience a gap regarding LDL control.These results also varied geographically, with much higher odds in New York City/New Jersey compared with a low in Houston.Women with CAD were also more likely to experience a gap in care, as demonstrated by not having achieved LDL control or taken a statin in the last 12 months.These results also varied geographically, with the greatest gender gap in New York City/Northern New Jersey compared with Houston.Notably, across all of these models, the gender gaps were larger than gaps associated with being a racial/ethnic minority or those associated with lower income.After adjusting for medical cost, risk, and medical use case-mix variables in model 2, the gender differences for all three LDL measures continue to favor men and remain statistically significant.Finally, in model 3 we included population health management case-mix variables to adjust for having been identified as eligible for chronic disease management.As among individuals with CAD, there is no meaningful change in the results, which indicates that the algorithms that identify eligible members and programs that some plan members receive for their CAD do not compensate for the observed gender differences in care received.Despite considerable public and private efforts to improve quality of care for CAD and increase awareness of CAD in women, in this analysis of 78,532 health plan members with DM and 27,918 with CAD, we found significant gender differences in quality of preventive cardiovascular care, with some variation across four major metropolitan areas.Although there were only minor gender differences on screening measures, such as whether at-risk patients had an annual LDL cholesterol test, there were large gender gaps in whether LDL levels were controlled, a key outcome of care.The gender gap in LDL control was especially large for patients with documented CAD, averaging 15 percentage points higher for men than women.The gender gaps were not specific to one region and varied considerably between and within regions.Gender gaps in LDL control among plan members with DM were smaller than for CAD, but the potential impact was still substantial.There are far more patients with DM than CAD.Of the 1 million adult health plan members considered for these analyses, there were nearly three times as many members with DM as with CAD.If the women whose care we examined had experienced quality of care with respect to LDL control on par with the men, another 1,282 women with DM and 266 women with CAD would have had their LDL controlled.Although we have focused on the absolute percentage point differences, as is the norm, there is also value in considering the relative difference in rates when the overall rate is low, as is the case for LDL control among DM members.In terms of relative risk of having a gap in LDL cholesterol control, among members with DM, women were 21% more likely than men to have uncontrolled LDL cholesterol, and among members with CAD women were 25% more likely than men.Our findings are consistent with previous studies showing gender gaps in preventive cardiovascular care.These studies include analyses of gender disparities in large samples of enrollees in commercial and Medicare plan ambulatory settings in 1999 and more recently in 2005.Although these studies examined data from different healthcare plans and periods of time and used somewhat different analytic models, results were consistent with prior studies.Both studies found moderate gender gaps in screening 
and larger gaps in control.Similarly, a pilot study in California found gender gaps in LDL screening that varied geographically.Geographic variation in the gender gaps in care suggests that the problem is not immutable and can help to target high-risk areas and build consensus among providers of care on such topics as CAD risk and need for therapy like statins.More recent analyses in Veterans Administration found gender gaps in control, which the VA has narrowed over time.Perhaps more notable than the gender gaps themselves is the lack of progress in narrowing the gap in LDL control, particularly outside the VA, in light of the much smaller gaps in screening.Unfortunately, recent reports on statin use and cholesterol control are not available from many sources, including the recent CMS and RAND report on gender disparities in care among Medicare Advantage beneficiaries, because the NCQA retired the HEDIS cholesterol control measures.In 2013, the American College of Cardiology/American Heart Association Task Force on Practice Guidelines released updated guidelines on cholesterol treatment.Citing a lack of evidence for the existing targets for cholesterol control, the new guidelines removed the targets and recommended high or moderate-intensity statin therapy based on patient risk factors.Of note, the measure was dropped because further decreases in LDL beyond the target were found to be beneficial, not because there is no value in attaining the former standard.As such, gaps in meeting the former standard still represent significant shortfalls in secondary prevention of CVD.Moreover, the American Heart Association responded by urging the NCQA to “create a replacement cholesterol measure as soon as possible that can be incorporated into future HEDIS measure sets.,In 2015, Medicare removed the previous LDL cholesterol goals from its quality measures as well.The NCQA subsequently included new measures for patients with CVD and DM focused on statin therapy and adherence rather than LDL cholesterol levels."Similar statin therapy measures were soon added to other quality measure sets, such as the CMS's eCQM.Although these new statin therapy measures have merits, it remains unclear how they are affecting gender gaps in CVD care and outcomes, or whether the absence of an LDL control measure has made persistent gaps invisible.Of particular concern is the evidence that CVD deaths have increased since the LDL control measure was dropped.According to the National Forum for Heart Disease and Stroke Prevention, “In 2015, the death rate from heart disease in particular increased for the first time in 22 years and the rate of people dying from stroke rose for the second time in 2 years”.Although the intention of the change in practice guidelines was to increase statin use, with no measure in place it is not possible to assess whether and to what extent women and/or men are benefitting from the change except by recreating the prior measure.A recent analysis of statin use by gender among Medicare beneficiaries with CVD and those with diabetes examined trends in statin use in 2010 through 2011 and 2012 through 2013, and found gender gaps of almost 15 percentage points favoring men among CVD patients and no difference among those with diabetes but not CVD.We know of no comparable work in the commercial population, and more work will be needed to assess the extent to which men and women are currently receiving guideline concordant care in the commercial, Medicare, and Medicaid populations.However, earlier 
research by Mosca et al. found that women with intermediate cardiovascular risk as assessed by the Framingham Risk Score were significantly more likely to be assigned by providers to a lower risk category than were men with identical risk factors.Assigned risk level significantly predicted lifestyle recommendations and preventive pharmacotherapy."The pattern of underestimating women's risk was similar for primary care physicians, obstetrician/gynecologists, and cardiologists.Providers were also less likely to prescribe statins to women and to increase the dose to achieve adequate LDL control, although this only partly explained the observed gender gaps in care."To assess whether the difference in care is attributable to gender differences in the possibility of achieving control over high LDL cholesterol, the VA added a new measure of control or statin use, which we also examined here.Taking treatment as well as control into account narrowed the gender gap, but a 7% gap remained.Similarly, we found that a combined measure reflected reduced gender gaps, but differences remained."This finding raises the question of whether women's lower levels of statin use and of achieving control are a function of patient preferences and behavior or of biological differences in women's response to statins.There is some evidence that women, especially younger women, underestimate their cardiovascular risk; also, women may experience more side effects and musculoskeletal pain, or more attempts at finding a tolerable statin."These factors may contribute directly to women's perceptions of statins based on reports of other patients and to their experience with taking statins themselves.However, adherence to statins among women in the VA is only slightly lower than among men, and our findings demonstrate smaller gaps among those with diabetes than those with CAD."Moreover, it is unlikely that women's lower rates of treatment and control are due to less care seeking, because we found no gaps or gaps favoring women in screening and other measures such as HbA1c.Gender biases found in provider studies may to some extent have been built into some of the industry standard care management and population health management software programs that are increasingly used by large provider groups and accountable care organizations.Our finding that gaps persisted after accounting for eligibility for care management programs based on one such software system suggests that the algorithm itself may need to be adjusted to better recognize risk among women, especially younger women.Algorithms may have been developed to assist with care use and costs in the short run rather than in the long run, when younger women with higher CAD risk begin to experience worse disease trajectories.Notably, gender gaps in care were on average larger than racial/ethnic or socioeconomic gaps, which, when observed, were largely accounted for by care management case mix variables.This suggests that within this health plan at least, decision algorithms were well-calibrated to detect minority or low socioeconomic status members at risk.Further work is needed to recalibrate these population health management algorithms and to further assess the possibility that assigning more women with CAD risk to health improvement programs could improve outcomes and lower costs in women, especially younger women.Our analysis is based primarily on administrative claims rather than electronic health record data, and no chart review was done.Performance rates on quality 
measures based on administrative claims may differ from those based on electronic health record data or chart review.However, the gender gaps tend to be similar, regardless of whether claims or electronic health record data are used."Because geographic mapping of gender differences in care is novel and not based on a representative national sample, we are limited in understanding whether the areas we identified represent unusually high or low levels of variation in men's and women's care.Additional work is needed to assess whether and to what extent observed gaps have varied, improved, or worsened over time.By examining care in four major metropolitan areas, representing different regions, we were able to include use patterns, health risks, and eligibility for existing programs aimed at improving patient outcomes as possible explanatory factors.Another strength was that we included information on eligibility for and engagement with behavioral and disease management programs.Taken together, these rich data provided an opportunity for a more extensive picture of gender gaps in care and of opportunities for algorithm improvement and then intervention.Gender gaps observed in prior studies persist.Our analyses also showed that, although the levels and gaps in quality varied somewhat across the four regions, the models did not.Interestingly, patterns of use and risk among members with CAD or DM explained little to none of the gender gaps in care.Moreover, existing behavioral and disease management algorithms seem to underrepresent women, and particularly younger women, despite their established CAD or DM."This finding suggests that gender biases that may at times favor men's care over women's may be reflected in the off-the-shelf algorithms designed to identify patients for disease management used by many health plans nationally and within specific regions or states.These algorithms can be cloned, changed, and tested to address gender disparities in care.Our findings can help health insurers to assess opportunities to improve algorithms to better identify eligible members for programs and increase engagement among women in those programs."In future work, it will be possible to assess the extent to which changing the algorithms and flagging more younger women for new and existing disease management programs closes the gaps and whether additional intervention efforts improve women's care. "Moreover, this lays the foundation for assessing algorithms used to measure gaps in care and whether programs and interventions differentially impact men's and women's care. 
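As an illustration of the sequential modelling strategy described above (demographics and metropolitan area first, then cost, risk, and utilization case mix, then program-eligibility flags), the sketch below shows how such nested logistic models could be fit and how the adjusted odds ratio for women could be tracked across models. This is a minimal illustration, not the analysis code used in the study; all column names (gap_ldl_control, female, race_eth, income_band, metro_area, risk_score, total_medical_cost, prioritization_score, visit_rate, and the program-eligibility flags) are hypothetical placeholders.

```python
# Minimal sketch of the sequential ("nested") logistic regression strategy
# described above. Column names are hypothetical placeholders; `female` is
# assumed to be a 0/1 indicator and `gap_ldl_control` a 0/1 care-gap flag.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_sequential_models(df: pd.DataFrame) -> None:
    """Fit three nested logistic models for a care-gap indicator and report
    the adjusted odds ratio for women vs. men at each step."""
    formulas = {
        # Model 1: demographics and metropolitan area only
        "model1": "gap_ldl_control ~ female + age + C(race_eth) + C(income_band) + C(metro_area)",
        # Model 2: + cost, risk, and utilization case mix
        "model2": "gap_ldl_control ~ female + age + C(race_eth) + C(income_band) + C(metro_area)"
                  " + risk_score + np.log(total_medical_cost + 1) + prioritization_score + visit_rate",
        # Model 3: + population-health-management eligibility flags
        "model3": "gap_ldl_control ~ female + age + C(race_eth) + C(income_band) + C(metro_area)"
                  " + risk_score + np.log(total_medical_cost + 1) + prioritization_score + visit_rate"
                  " + dm_program_eligible + behavioral_program_eligible",
    }
    for name, formula in formulas.items():
        fit = smf.logit(formula, data=df).fit(disp=False)
        or_female = np.exp(fit.params["female"])
        lo, hi = np.exp(fit.conf_int().loc["female"])
        print(f"{name}: OR(female) = {or_female:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

In the study's terms, comparing the coefficient on the gender indicator across the three fits shows whether the case-mix and program-eligibility variables explain the gap; in the data discussed above they did not.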
"Awareness of and action to address gaps in the quality of women's CAD and DM care are limited, in part, because quality of care is not routinely measured and reported by gender.Conventional methods of measuring and reporting quality of care focus on average quality performance scores across the overall population that a plan serves in different markets or regions; separate assessments and reporting by gender or local area are rare.Without routine tracking and reporting of quality of care by gender, the care received by women is generally assumed to be equal to that received by men, if not better for women owing to their greater health care use."As a result of this assumption, the quality gap in CVD and diabetes care remains largely invisible to individual women, providers, payers, and policymakers, even among those seeking to improve women's health and health care.In cases where gender gaps in care have been monitored and targeted, such as in recent initiatives by the Veterans Health Administration, marked reductions in gender disparities in CVD and other types of care have been achieved, although some gaps persist.Routine analysis and reporting of these measures by gender would bring attention to the problems."Mapping quality of care by gender may be particularly helpful in identifying the scope of the problem and helping to engage relevant decision makers in local metropolitan areas in discussions and explorations of ways to improve routine aspects of women's care.Moreover, these findings can be used to help women understand the extent to which they may be at risk of experiencing gaps in care.When gaps in care are invisible, they are intractable.Visual displays such as maps make information accessible and relatively easy to interpret while improving identification of geographic regions for targeted improvement.Using geospatial technology to generate maps for quality of care metrics and making the information accessible to key stakeholders provides the opportunity to better address disparities in CVD management for women."Closing the gender gaps in CAD and DM care could improve women's quality of life and longevity.Increased attention to CAD and improving CAD-related care for women is warranted.Unmet need for CAD and DM screening and treatment contributes to avoidable morbidity, mortality, and costs.Moreover, the extent and variation in gender gaps in care suggest that gender-stratified reporting could shed light on differences in quality of care and facilitate quality improvement.Both health insurers and provider groups need to examine whether algorithms and programs intended to close gaps in care are effectively overcoming gender gaps in care or outcomes.
Background: Prior research documents gender gaps in cardiovascular risk management, with women receiving poorer quality routine care on average, even in managed care systems. Although population health management tools and quality improvement efforts have led to better overall care quality and narrowing of racial/ethnic gaps for a variety of measures, we sought to quantify persistent gender gaps in cardiovascular risk management and to assess the performance of routinely used commercial population health management tools in helping systems narrow gender gaps. Methods: Using 2013 through 2014 claims and enrollment data from more than 1 million members of a large national health insurance plan, we assessed performance on seven evidence-based quality measures for the management of coronary artery disease and diabetes mellitus, a cardiac risk factor, across and within four metropolitan areas. We used logistic regression to adjust for region, demographics, and risk factors commonly tracked in population health management tools. Findings: Low-density lipoprotein (LDL) cholesterol control (LDL < 100 mg/dL) rates were 5 and 15 percentage points lower for women than for men among members with diabetes mellitus (p < .0001) and coronary artery disease (p < .0001), respectively. Adjusted analyses showed women were more likely to have gaps in LDL control, with an odds ratio of 1.31 (95% confidence interval, 1.27–1.38) in diabetes mellitus and 1.88 (95% confidence interval, 1.65–2.10) in coronary artery disease. Conclusions: Given our findings that gender gaps persist across both clinical and geographic variation, we identified additional steps health plans can take to reduce disparities. For measures where gaps have been consistently identified, we recommend that gender-stratified quality reporting and analysis be used to complement widely used algorithms to identify individuals with unmet needs for referral to population health and wellness behavior support programs.
468
Graphics cards based topography artefacts simulations in Scanning Thermal Microscopy
Scanning Thermal Microscopy is a special Scanning Probe Microscopy technique dedicated to measurements of local temperature and heat transfer phenomena .Such measurements are important namely in the field of microelectronics and in various nanotechnology fields where local power dissipation and heat generation plays some role.It offers far the best spatial resolution out of all the thermal techniques, however despite its long development, the uncertainty of the measured temperature or thermal conductivity is still very large on most of the devices, if the method is made traceable at all.In most of the commercially available scanning thermal microscopes a local resistive heater and/or temperature sensor, either in a form of a microfabricated probe or a very thin wire bent to form a probe is used .This probe is scanned across the surface in a standard contact mode, therefore microscope provides at least two channels of simultaneously provided information – topography one and temperature related one.The inevitable presence of some surface structures and irregularities on the measured sample is also one of the big problems of the uncertainty evaluation as these lead to artefacts in all the thermal channels .In case of local temperature measurements these can be prevented using null point technique in some of its variants – evaluating the apparent temperature for zero flux between the probe and the sample.In case of measurement of local thermal conductivity the results are based on the non-zero heat flux, thus such approach cannot be done.Topography artefacts in SThM are related to changes of the flux while there is a variance of the contact area of the probe depending on local sample geometry or while there is a variance of sample volume where heat can flow into different parts of the sample.If the probe is located e.g. on the edge of a flat sample surface, we can expect that the heat flow between the probe and the sample will be approximately twice lower compared to the situation when the probe is at the center of the sample.At the edge we have twice smaller probe-sample contact area and less material in probe vicinity where heat could flow to.On real samples the probe-sample area is varying rapidly both due to microscale objects that may accidentaly lay on surface and random roughness that is present nearly everywhere.One possible way of treating the topography artefacts is to model them and to correct the measured data afterwards, or at least detect which parts of the data are influenced by the artefacts and which may contain other relevant information.The biggest problem in such simulations is that we need to perform pixel-by-pixel simulation of the probe-sample response, forming a virtual SThM image.The number of individual calculations of the probe-sample interaction is given by number of pixels of the final image, which usually means hundreds of thousands at least.For such number of individual calculations, most of the modeling tools are too slow.In previous articles we have implemented and compared various methods for calculation of a virtual SThM image that could be used for estimation or removal of the topography artefacts.Some of the methods tested were correct, but slow, e.g. Finite Element Method.Some produced apparently nice results, but only qualitatively, being rather fast, e.g. 
Neural Network.Due to this tradeoff between physical correctness and speed we have used the simpler techniques like Neural networks so far, which are however limited to some class of surfaces and need very careful measurement and neural network training.Moreover, all the physical content is hidden in training of the neurons and the result is therefore not exact solution of some physical equation.The next logical step is to make physically correct solution of the tip-sample heat transfer fast enough for practical purposes.We have developed a methodology and associated numerical tool that is focused on fast calculations of virtual SThM images.In contrast to general packages it is almost single purpose software.This allowed us to optimize the calculation speed much further than what would be possible with an universal software.We have started with a very simple numerical approach based on Finite Difference Method with regular equally spaced three dimensional mesh and we have optimized this for fast calculations of steady state heat transfer in slightly changing tip-sample configurations as the tip is scanning across the sample surface.It should be noted that the presented approach is still not covering all the physical phenomena that would need to be taken into account for heat transfer in all the scales observed in SThM experiments.On the small scale, the heat flux through the air gap between the probe side and the sample surface is ballistic and should be modeled using another approach .Also in the probe and in the sample itself, the heat flux is not necessarily diffusive.There is some meniscus formed by capillary condensation in the gap between the probe and the sample and some of the heat is transferred through the water layer .On the other hand we believe that for larger probes, at presence of different surface contaminations, adsorbed water layer, etc., the heat flux is so complex that assumption of the simple diffusive heat transfer is still a good approximation for topography artefacts compensation.Moreover, some of the promising models for taking the ballistic heat transfer into the account, e.g. based on some flux lines evaluation would also benefit of the computed Poisson equation solution, so the developed model can be also used as a first step for building more complex and more physically correct model for these small scale calculations.The most common method for solving the equation is based on discretization of the domain of interest into finite number of elements.It is important to study mutual interactions between the neighbouring elements.The actual temperature field is thus determined by the object’s geometry, the distribution of thermal conductivity and boundary conditions.There are two types of boundary conditions in our case.At some parts of surface we know the value while other parts have zero heat flux to or from the surroundings.Any point of the surface is one of those two distinct types.The known value of temperature corresponds to modeling the contact with an external thermostat, i.e. 
the ambient room air.The second type of element is of unknown temperature, but it is given that it is surrounded by a perfect insulator.In the algorithm, the Dirichlet boundary condition is easy to implement.Since we know the value of a particular boundary element, there is no need to compute its value and the calculation is just skipped.On the other hand, the boundary element with Neumann conditions is similar to any other element from inside the object with respect to zero net heat flow.The difference is only in the number of neighboring elements.On the surface, there are less neighbors than in the inside.Again, the numerical value of any such element is weighted average of its neighbors.The weights are calculated from the thermal conductivities.The thermal conductivity of any element outside the object is zero due to an assumption of perfect insulator.It can be easily tested that an algorithm which repeatedly sets all values to weighted average of the neighbors eventually converges successfully.After each step the values are closer and closer to the correct value and also the correction is smaller in each step.As a criterium of convergence the deviation from weighted average can be used.If the largest correction of all the elements is smaller than some pre-defined value, the computation can be considered as sufficiently precise.The heat flux is calculated using the Fourier’s law, where the gradient is taken as a difference of neighboring elements multiplied by local thermal conductivity.The gradient is generally a vector, but since we need to know only the heat flux inwards or outwards, only one component of the vector is calculated.This is even more simplified by the fact that we have a prism-like object.Only those surface areas with known temperature can act as sources or sinks of heat power.In our case, the main goal is to calculate the thermal resistance between the heated tip and the ambient.For this reason, we calculate separately the heat power flowing inwards and outwards.Obviously, the heat source has higher temperature than the heat sink.The heat source represents the heated probe of the thermal microscope.The heat sink resembles the ambient.In the numerical calculation, the actual temperatures of the tip and the ambient can be arbitrary values, because the overall heat power is finally divided by the temperature difference.In fact, two results may be useful for further processing:the heat power at a certain temperature difference: This resembles the behavior of the thermal microscope, where the heat power is calculated as current times voltage across the heated tip.the heat power normalized to one Kelvin: This property in watts per Kelvin is the thermal conductivity of the whole object.Reciprocal value is heat resistance in Kelvins per watt.It is useful for comparison of various geometries and objects as it is more logical to use a property independent on temperature difference.All the mentioned results can be calculated using the algorithm described above, only the whole geometry is three dimensional so most of the elements have six neighbors.The typical geometry is a rectangular box shown in Fig. 
2, where the heat source is placed on top face and the heat sink is the whole bottom face.All elements on side faces fulfill the condition of zero net heat flux.A custom built code for the Finite Difference Method was written according to the basic principles discussed above.A client-server approach was used to ensure access to fast simulations to wider community of users, not necessarily having the high-end graphics card installed on their own system.This also means that the calculation can be run directly from computer on which the scanning is performed, without any effect on the performance of the microscope software.This approach is also used to allow users from outside of the Czech Metrology Institute to access the calculations on request.Client is a simple standalone module for open source software for SPM data analysis Gwyddion .It is focused on topography artefacts simulation only and it does the mesh generation for both the probe and the sample, passes the data to the server and requests calculations at discrete points of the final virtual SThM image.Calculations don’t need to follow a regular mesh as the library for non-equidistant measurements in SPM Libgwyscan is used for virtual scan path generation and results triangulation back to regular mesh.This is used namely for preview purposes as the successively refined image is more suitable for fast check of possible input parameter errors than image that is computed line by line in final resolution.For the simulations, i.e. on the server side, we used our own code written in the C language.This code is slightly more general than the client one and in principle allows using any sample and probe mesh and control more calculation parameters, however it is still optimized for purposes of virtual SThM measurements as described below.It was compiled using the GNU C compiler and the CUDA framework .The code was intended for a regular user who has no access to a supercomputing facility, so the performance of the code was tested on a gaming PC equipped with easily available graphics card.The configuration of our server PC is as follows:CPU: Intel® Core™ i7-6700K running @ 4.00 GHz, socket LGA 1151.RAM: 32 GB DDR4 running @ 2.8 GHz.GPU: 4×GIGABYTE GeForce GTX TITAN X 12 GB GDDR5 memory.OS: 64-bit Linux openSUSE Leap 42.1, GCC 4.8.5, CUDA 7.5 framework.It is worth to mention that the computer components and the graphics card used are easily available and that especially the cost of the GPU is much lower compared to HPC class Tesla products with comparable computing power.As shown above, the Finite Difference Method is quite simple and in its basic form it would not lead to speedup of the computations compared to the Finite Element Method as the number of cubic elements that we need to use in FDM is much larger than number of tetrahedral elements of arbitrary size and orientation in FEM, covering the object surface with the same accuracy.However, due to simplicity of FDM the method is ideal for parallelization and computation of virtual scans.First, the same calculation is done for each element of the mesh, independent on the other elements.Second, it is not memory demanding as no large matrix needs to be created.Third, the sample problem is repeated in the adjacent pixel of the virtual scan and both the mesh and results can be partially recycled.The mesh is generated in two steps.First, the data used for sample surface description are loaded into Gwyddion open source software and regular grid model of the sample is built and saved as VTK 
file.Optionally, various thermal conductivities can be assigned to different areas on the sample or to various layers below the surface.Then the probe shape is generated or loaded as a surface.The probe is also converted to a regular grid mesh and saved as VTK file.The two VTK files are then passed to the solver, which merges them putting the probe to desired position above the surface, according to selected parameters and to the pixel of the final simulated image that is computed.An example of the complete mesh is given in Fig. 2 together with the computation result.In order to be able to compute all the pixels of the final image in a reasonable time, we concentrated on maximum possible speedup of the calculations.This can be split into two parts: optimizations at the algorithm level and use of the graphics card.We are using iterative solver, so it is important to initialize it with sufficiently good values already.As we are repeating many very similar calculations, going from one pixel on the final image to the other, it is useful to search for the best possible initialization procedure.In Fig. 4 the effect of various regimes of the initialization on convergence is shown, starting with filling the whole area with zeroes and with linearly interpolated value obtained from the top and bottom fixed boundary condition.The more advanced initialization is represented by solving the problem for the same model but scaled down by some factor and using the result for initialization.Finally, while simulating the scanning process, we move only slightly from one probe position to the other.As the grid is uniform, we can use the resulting temperature array from the previous step as initialization values, only shifting it and padding by the values at the border.In Fig. 6 the effect of this extension is shown, we can see that for small shifts it can generate an extra speedup of the calculation.All the above mentioned methods work both on a CPU and a GPU, however the use of a GPU leads to a further speedup.The biggest difference between the GPU and the CPU is in the number of cores, that can be in order of hundreds in case of the GPU.In contrast to the CPU, the code cannot be run on each core independently and the same instruction is run on more threads within the GPU.It is therefore good to keep the code relatively simple, which suits well the simple mesh and iterative procedure in FDM.In our case the problem is split to individual mesh points and each is computed within one thread.Thread block dimensions are optimized for speed on particular graphics card.Data are loaded at the beginning of the computation into the GPU and all the operations are handled within GPU to limit the memory transfers, that are relatively slow.On the other hand, the use of different memories within the GPU is not optimized so there is still possibility of further improvements of the code.The developed computational approach is fast, however we can expect that the simplicity of the mesh and the calculation approach will lead to reduced accuracy compared to more advanced differential equations solution techniques, like Finite Element Method.In Fig. 
7 the method is compared to FEM using a test task that we propose for comparison of various numerical techniques in SThM probe-sample interaction modeling.The heat transfer between a half-sphere and a planar sample is computed for various values of the gap between them.The test is intentionally very simple and recalls only very basically the real probe-sample configuration in SThM.The FEM calculation was done using SfePy software.The computation time depends strongly on mesh density as well as the precision of the result.After thorough fine-tuning of spatially variable mesh density the reasonable number of elements was chosen to be 500,000.It was tested that doubling the mesh density changes the results by less than half a percent.One computation on one CPU core took 10 min on average.One hundred mutual distances between the tip and the plane was calculated on one hundred cores of a supercomputer.Thanks to such parallelization, all the computation took 10 min.The FDM calculation was done using a single GPU of the above mentioned computing system; a single point was computed within 3 s. Two different results corresponding to a different discretization of the computational volume are shown – one for a small grid with spacing of 2 nm and one for a large grid with spacing of 1 nm.We can see that, mainly due to the staircasing effect, the grid does not fully represent the half-sphere shape and that the coarse grid leads to poor results, even if these have expected dependence on the probe-sample distance.Even for the large grid we can see differences between the results obtained using FEM and FDM, which can also be caused by discretization issues.When the probe-sample gap is small, the effect of staircased probe apex becomes larger, relative to the gap.Similar effects can be expected also in other simulations using FDM, so this test is also useful to guess what accuracy we can expect.In our approach, we assume the FEM results to be correct and any disagreement with FDM are the fault of FDM.Ideally, we should make comparison not with FEM, but with SThM directly.This is not easy, as most of the SThM results are still far from being quantitative, e.g. producing the power transferred between probe and sample or tip-sample system thermal resistance which both can be result of the FDM calculation.Moreover, there are some heat losses from the probe that cannot be captured by the presented model – e.g. 
losses to the probe itself and large part of the losses to the air; such effects don’t have influence on the topography artefacts that we want to simulate, but affect significantly the entire energetic balance of the probe.That’s why we proposed another approach on the data evaluation in one of our previous papers performing a linear transformation of the computed data to fit the experimental results and evaluating the sample properties from non-linear components of the signal.The calculated thermal resistance is multiplied and shifted to such extent to get the best fit with the experimental data.The thermal resistance is a suitable physical property to be both measured and simulated.Thermal resistance is a ratio between the temperature difference and heating power, thus having a unit kelvins per watt.In the simulation, the temperature difference is given as an input parameter and heating power is found as a computation result calculated as an integral of heat flux over surface area.In the experiment, the heat power is known from current and voltage on the probe and the temperature is calculated from electrical resistance.We don’t go into details of the computation procedure here as this is not the aim of this paper, however this is to explain that all the comparisons to the real data further in the paper are done with the help of this linear transformation.When the theoretical knowledge of the SThM heat transfer progresses so far that we will obtain more quantitative data, the linear transformation step will be obsolete.Probably the simplest sample used in nanoscale metrology is a step height standard, formed by a stripe of a material on top of a flat sample surface.Already on this simple structure we can see plenty of topography artefacts while measured by SThM and therefore we present some of the computation results for this type of sample first.The benefit is that the sample is homogeneous in one direction and therefore it is enough to present individual profiles extracted perpendicular to the step which not only simplifies the calculation but also quantitative results presentation.In Figs. 8A and B the SThM measurement on a 40 nm step sample is presented.Measurement was performed using Dimension Icon from Bruker, SThM electronics from Anasys Instruments and Bruker VITA-DM-GLA-1 probes.In Fig. 8C the averaged profile extracted from the thermal channel data is presented together with the result of the simulation.The simulation was based on the idealized step height geometry, tip apex radius evaluated by blind tip estimation and table values of thermal conductivities.It should be noted that the linear transformation itself has large effect on the apparent agreement between the experimental and measured data and purpose of this illustration is not to evaluate the step material thermal conductivity, but to show the similarity of the topography artefacts to the real ones even with a simple sample model.The simple structure of the step height sample is good also for further tests of the influence of various sample and probe properties on the topography artefacts.From numerous parameters that we could vary we show in Fig. 9 the effect of the stripe material thermal conductivity, and the effect of the tip shape.Both results are confirming what one would expect – the lower thermal conductivity leads to smaller heat transfer and the larger probe leads to larger topography artefacts.Finally, the calculation time for individual data points in the profile is shown in Fig. 
10, showing the effect of re-using the values computed in the previous step on the speedup of the calculations.As we want to calculate the virtual SThM image from the topography measured by the SThM tip, we need to find the true surface first, i.e. to remove the probe-sample convolution effects from the surface topography.The whole calculation was done using the following steps:Probe apex shape is determined using the blind tip estimation algorithm .To do this the surface needs to have enough topographic features like sharp spikes, which is the case of the used surface.In our case it was possible; the estimated probe had radius of 96 nm in fast scan axis direction and 68 nm in slow scan axis direction.Surface reconstruction is performed in order to reduce the probe-sample convolution effect.The resulting surface is shown in Fig. 11C together with the estimated probe.Reconstructed surface is used to form the mesh, probe is modeled using the results obtained in step 1.Surface and probe conductivity is set to have silicon material properties, with thin native SiO2 layer on the top of the surface.Virtual SThM image is computed pixel by pixel using the Finite Difference Method.The result is shown in Fig. 11D.We can see that the thermal signal variations are properly modeled using the above approach.If we change the sample thermal conductivity, the whole transferred power decreases, however the ratio between the artefacts and the average signal value stays the same.This result was also observed on the step height sample simulations and on preliminary studies using FEM so the use of samples with significantly higher or lower conductivity, e.g. during the instrument calibration does not lead to decrease of the influence of topography artefacts.The last used approach — the use of the SThM topography for building a mesh for the calculation of SThM artefacts is valid only if a true surface of the sample can be determined.In case of significant distortion due to the probe-sample convolution this cannot be done.This is the case of the step height measurement where there is an area of double touch between probe and sample while the tip is scanning across the edge.This is the area where part of the information is lost and cannot be restored using any algorithm.The roughness example shown above is not that critical, however even here some discrepancy between the real surface shape and what we get from SThM probe measurements can be found.To see this we have imaged the same area of the sample by SThM probe used in this study and with a brand new AFM probe in the ScanAsyst® mode.The comparison of a small surface area is in Fig. 12.As the SThM probe is a bit bigger than the AFM one and was also already used for some time before this study, we can consider the AFM image as the true surface.We can see that the surface reconstruction of the SThM topography data clearly helps to get the typical surface features originating from anodic etching which look like a network of small protrusions.These protrusions are a bit wider than what we get from AFM data which is due to imperfection of the estimated probe shape, however we can conclude that for this class of surfaces the reconstructed topography is good approximation of the true one.To improve the estimated probe shape it might be useful to consider not only the topography channel, but also the thermal channel during the blind tip estimation phase, which could also provide additional information of the heat transfer mechanisms as suggested in Ref. 
An approach for fast calculation of the heat transfer between a SThM probe and a sample was presented. It is based on the assumption that diffusive heat transfer plays the key role in the whole system, so that the problem can be reduced to the solution of Poisson's equation. For this, a Finite Difference method is used, which allows fast calculation of the very similar problems that need to be solved for each probe position as it moves across the surface. A whole virtual SThM image is then computed. This can be used to estimate and, eventually, correct the topography artefacts arising from probe-sample geometry variations during the scan. The described approach was tested in custom-built software that uses a graphics card for fast parallel calculations. The biggest difference from general, commercially available Finite Element Method packages lies in the simplicity of the mesh, which allows easy reuse of the mesh and of the computational results when computing adjacent pixels of the resulting image.
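As a simplified illustration of why the regular mesh makes this reuse cheap, the sketch below solves the steady diffusive heat problem for the temperature field by Jacobi relaxation on a uniform 3D grid and accepts the previous pixel's field as the initial guess, so that only a few sweeps are needed when the geometry changes slightly. This is a didactic sketch assuming a homogeneous medium and given Dirichlet boundary nodes; it is not the GPU implementation described in the paper, which also handles spatially varying conductivity.

```python
import numpy as np

def solve_temperature(bc_mask, bc_value, initial_guess=None, tol=1e-6, max_sweeps=10000):
    """Relax the Laplace equation on a regular 3D grid (Jacobi iteration).
    bc_mask marks Dirichlet nodes (probe held at the set temperature, far
    boundaries at ambient); bc_value holds their temperatures.  Warm-starting
    from the previous pixel's field greatly reduces the number of sweeps."""
    T = np.where(bc_mask, bc_value, 0.0) if initial_guess is None else initial_guess.copy()
    T[bc_mask] = bc_value[bc_mask]
    for _ in range(max_sweeps):
        T_old = T.copy()
        # Jacobi update: average of the six neighbours (uniform grid,
        # homogeneous conductivity for simplicity).
        T[1:-1, 1:-1, 1:-1] = (T_old[:-2, 1:-1, 1:-1] + T_old[2:, 1:-1, 1:-1] +
                               T_old[1:-1, :-2, 1:-1] + T_old[1:-1, 2:, 1:-1] +
                               T_old[1:-1, 1:-1, :-2] + T_old[1:-1, 1:-1, 2:]) / 6.0
        T[bc_mask] = bc_value[bc_mask]  # re-impose the Dirichlet nodes
        if np.max(np.abs(T - T_old)) < tol:
            break
    # The heating power would then follow from integrating the heat flux over
    # the probe surface (not implemented in this sketch).
    return T
```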
We present an approach for simulation of topography related artefacts in local thermal conductivity measurements using Scanning Thermal Microscopy (SThM). Due to variations of the local probe-sample geometry while the SThM probe is scanning across the surface the probe-sample thermal resistance changes significantly which leads to distortions in the measured data. This effect causes large uncertainty in the local thermal conductivity measurements and belongs between most critical issues in the SThM when we want to make the technique quantitative. For a known probe and sample geometry the topography artefacts can be computed by solving the heat transfer in the SThM for different probe positions across the surface, which is however very slow and limited to single profiles only, if we use standard tools (like commercially available Finite Element Method solvers). Our approach is based on an assumption of diffusive heat transfer between the probe and the sample surface (and within them) and on the use of a Finite Difference solver that is optimized for the needs of a simulated SThM images computing. Using a graphics card we can achieve computation speed that is sufficient for a virtual SThM image generation on the order of few hours, which is already sufficient for practical use. We can therefore use the measured sample topography and convert it to a virtual SThM image – which can be then e.g. compared to real measurement or used for artefacts compensation. The possibility of performing fast simulations of topography artefacts is also useful when uncertainties of the SThM measurements are evaluated.
469
MICU1 Serves as a Molecular Gatekeeper to Prevent In Vivo Mitochondrial Calcium Overload
Calcium entry into the mitochondria is critical for cellular homeostasis and is believed to modulate bioenergetic capacity and help determine the threshold for cell death.For over 50 years, it has been appreciated that isolated mitochondria could rapidly take up calcium, using the large mitochondrial membrane potential across the inner mitochondrial membrane as a driving force for the entry of the ion.Subsequent studies demonstrated that this uptake was highly selective for calcium.These and other studies help define the biophysical properties of the inner mitochondrial membrane channel, now termed the mitochondrial calcium uniporter complex.Nonetheless, the molecular components of the uniporter complex remained elusive for over five decades.However, in the last several years, rapid progress has been made, including the molecular identification of a 40-kD inner mitochondrial membrane protein as the pore-forming protein of the uniporter complex.The composition of the uniporter complex is now believed to contain several additional components beyond the pore-forming MCU protein.These include the MCU paralog MCUb, a family of related EF-hand-containing proteins, and a small 10-kD protein, EMRE.The composition of the uniporter complex appears to differ between various cell lines and between different tissues; for instance, the expression of MICU3 appears to be largely confined to the brain, and the relative ratio of MCU and MCUb appears to markedly differ in various organs.These differences in composition might, in turn, be important for the observed tissue-specific differences in uniporter activity.Although the recent molecular identification of MCU has provided significant insight into the regulation of mitochondrial calcium entry, numerous questions persist.Experimental evidence suggests that rates of calcium entry through the uniporter are sigmoidal, with slow rates of calcium uptake at low extra-mitochondrial calcium concentrations and faster uptake when calcium concentrations begin to exceed 10–15 μM.This sigmoidal behavior is critical because this endows resistance to calcium overload at resting cytosolic calcium levels while still allowing the mitochondria to respond to a rise in cytoplasmic calcium induced by agonist stimulation.Although such gating behavior could result from the intrinsic properties of MCU itself, considerable attention has focused on other components of the uniporter complex.Particular emphasis has been placed on what inhibits calcium uptake at low calcium levels, providing the necessary gatekeeping functions of the uniporter and thus preventing the potentially disastrous consequences of calcium overload.Studies have suggested that both MICU1 and MICU2 might mediate this gatekeeping function.To date, most studies have relied on the behavior of permeabilized cell lines in which uniporter components were knocked down or used specific cell lines containing clustered regularly interspaced short palindromic repeats-mediated cellular knockouts.Analysis of these cellular data employing knockdown and knockout of MICU1 has, however, often revealed conflicting results.For instance, the role of MICU1 at high calcium concentrations has varied between studies, with some arguing that knockdown of MICU1 does not alter mitochondrial calcium uptake following agonist-induced calcium release.Others have suggested that cells lacking MICU1 are impaired in this capacity, whereas still others have demonstrated that the effects may be determined by the strength of the agonist 
stimulation.There are similar ambiguities with regard to whether genetic inhibition of MICU1 leads to resting calcium overload, again with some studies arguing that it does and others suggesting that it does not.It is likely that differences in experimental approaches, including differences in the degree of knockdown, as well as differences in the intrinsic composition of the uniporter complex in the various cell lines employed might at least partially explain these divergent results.Further insight into the regulation of uniporter activity has come from the recent description of a cohort of children who presented with a range of severe symptoms characterized by profound proximal skeletal muscle weakness accompanied by neurological features that included chorea, tremors, and ataxia.Subsequent genetic analysis identified that the afflicted children carried loss-of-function mutations in MICU1 that were inherited in an autosomal recessive fashion.Initial characterization of fibroblast cell lines derived from patients and controls were consistent with the hypothesis that loss of MICU1 leads to mitochondrial calcium overload.Although MICU1 deficiency is rare, the clinical phenotype described in these patients shares features with other, more common conditions, including muscular dystrophies as well as congenital and mitochondrial myopathies.These entities are often accompanied by mitochondrial dysfunction and damage.Indeed, mitochondrial calcium overload is being increasingly viewed as a common final pathway for a wide range of human pathologies.Most of these conditions, like MICU1 deficiency, have few therapeutic options.Here, we describe a mouse model in which we have deleted MICU1.These mice exhibit many of the characteristics recently observed in patients lacking MICU1 expression.Remarkably, although the absence of MICU1 leads to high perinatal mortality and marked abnormalities in young mice, many, but not all, of these defects improve with time.Older MICU1−/− mice also have improved mitochondrial calcium-handling parameters, suggesting an age-dependent remodeling of their uniporter complex.Consistent with this remodeling, older MICU1−/− mice demonstrate a decline in EMRE expression.To further assess the importance of this decline in EMRE expression, we generated mice with targeted deletions in the EMRE locus.Surprisingly, EMRE heterozygosity significantly ameliorates the MICU1−/− phenotype.These results clarify the molecular regulation of mitochondrial calcium influx and suggest potential strategies that might have therapeutic benefit in the growing number of conditions characterized by calcium overload.In an effort to better understand the molecular function of MICU1, we generated MICU1−/− mice using CRISPR-mediated methods.Four different founder mice, representing four independent genomic targeting events, were generated.Each line yielded similar results, and therefore, we have incorporated data from all four lines.As expected, mitochondria isolated from this MICU1−/− mouse lacked expression of MICU1 protein.Next, we asked whether deletion of MICU1 altered calcium uptake.Wild-type liver mitochondria loaded with a calcium-sensitive fluorophore showed little evidence of calcium uptake when challenged with a low concentration of extra-mitochondrial calcium.In contrast, the absence of MICU1 expression led to a significantly increased rate of calcium uptake under these conditions.In the setting of higher concentrations of extra-mitochondrial calcium, the absence of MICU1 appeared to reduce 
the rate of mitochondrial calcium entry.Similar alterations in calcium uptake were observed in MICU1−/− mitochondria isolated from other tissues such as brain.Thus, in purified mitochondria, the absence of MICU1 expression augments calcium uptake rates at low calcium concentrations and inhibits uptake rates at high calcium concentrations.We further validated these properties using mouse embryonic fibroblasts derived from embryos of MICU1−/− mice or their WT littermates.As observed in isolated mitochondria, MICU1−/− MEFs lacked discernable MICU1 expression.As seen with isolated mitochondria, permeabilized WT MEFs showed little uptake of calcium at low calcium concentrations.In contrast to WT MEFs, MICU1−/− MEFs displayed increased rates of calcium uptake under these same conditions.When MICU1 expression was reconstituted in MICU1-deficient cells, WT calcium uptake properties were restored.Next, we sought to characterize the phenotype of mice lacking MICU1 expression.Breeding of heterozygous MICU1+/− mice resulted in significantly fewer MICU1−/− mice than expected.Analysis of over 1,300 births demonstrated that only one in roughly every six or seven MICU1−/− animals was able to survive beyond the first post-natal week.In contrast, examination of litters from late embryogenesis to immediately after birth demonstrated that, although late-stage MICU1−/− embryos were slightly smaller than WT littermates, there appeared to be no embryonic selection against MICU1−/− progeny.Indeed, when we analyzed a total of 60 late-stage embryos/post-natal day 1 pups, we observed the precise number of mice expected in each genotype.In contrast, although it is not unusual for some pups to die in the first few days after birth, when we assessed the litters of MICU1+/− crosses, 96 of 138 spontaneous perinatal deaths were of the MICU1−/− genotype.Thus, although MICU1 expression appears to be largely dispensable for development, the absence of MICU1 results in a high, but not complete, perinatal death rate.These observations contrast slightly from a very recent report demonstrating that MICU1 deletion resulted in 100% perinatal mortality.In that report, the authors observed that MICU1−/− mice exhibited a decreased number of specific brainstem neurons known to regulate respiration.Although this decrease was just a trend, other models where deletion of critical mitochondrial proteins leads to perinatal mortality have also shown impaired neuronal innervation of the diaphragm.In that context, we also observed a modest trend for a decreased number of cervical motor neurons required for respiration in MICU1−/− late-term embryos, although it remains unclear to what degree this contributed to the high rate of perinatal death.At 1 week of age, surviving MICU1−/− mice appeared underdeveloped and smaller and weighed roughly 50% less than their WT littermates.Young MICU1−/− mice, like children bearing mutations in MICU1, appear to suffer from ataxia.Functional analysis revealed that 1-month-old MICU1−/− mice had severely impaired performance on a balance beam.A closer histological examination of the cerebellum revealed abnormal persistence of the outer granular layer in 12-day-old mice lacking MICU1 expression.In addition, the overall cerebellum of young MICU1−/− mice appeared underdeveloped.Consistent with their observed neurological defects, MICU1−/− mice also exhibited alterations in the post-natal arborization of Purkinje cells.Given that patients lacking MICU1 develop proximal myopathies, we also assessed muscle strength in 
MICU1−/− mice.Histologically, we did not observe any evidence of central nuclei or marked changes in the abundance of fast and slow fibers in the skeletal muscle of MICU1−/− mice.Similarly, histochemical enzyme assays of succinate dehydrogenase and cytochrome c oxidase activity revealed no significant differences between WT and MICU1−/− muscle fibers, except that, consistent with overall body size, the MICU1−/− fibers were considerably smaller.Nonetheless, 4-week-old MICU1−/− mice were markedly impaired in an inverted grid test of skeletal muscle function as well as in their performance in a wire hang test, another measure of skeletal muscle strength and coordination.Thus, these mice appear to exhibit many of the neurological and myopathic defects observed in MICU1-deficient patients.We also, however, observed additional properties not formally assessed in human patients.Although analysis of splenic and thymic T cell maturation revealed no discernable alterations, the numbers of splenic B cells were markedly reduced in MICU1−/− mice.This decrease in overall B cell number was consistent with an observed increase in cell death observed in MICU1−/− B cells.The more pronounced effects observed in B cells, compared with T cells, may reflect intrinsic differences in MICU1 expression between these two cell types.Next, we sought to better understand the biochemical basis underlying the various phenotypic alterations observed in MICU1−/− mice.Given our observations that, depending on the calcium concentration, MICU1 deletion has either positive or negative effects on calcium uptake, the net effect on resting mitochondrial calcium levels was difficult to predict a priori.Therefore, we first sought to directly measure in situ matrix mitochondrial calcium levels.This analysis revealed that mitochondria from young MICU1−/− mice had an approximate 1.5-fold increase in their resting levels of mitochondrial matrix calcium.This argues that the predominant in vivo effect of MICU1 is to function as a gatekeeper because, in the absence of this molecule, mitochondria exhibit tonic calcium overload.A number of abnormalities have previously been associated with calcium overload, including alterations in mitochondrial morphology and increased generation of reactive oxygen species.Consistent with these past observations in other systems, electron micrographs revealed that MICU1−/− mice had marked alterations in skeletal muscle mitochondrial morphology.Similar alterations were seen with mitochondria in the brain.In addition, we noted that ATP levels were also significantly reduced in MICU1−/− skeletal muscle, although alterations in resting ATP levels were not as evident in the brain.Another common hallmark of mitochondrial dysfunction is a rise in lactate levels, a feature also evident in the skeletal muscle of MICU1−/− mice.We saw a similar increase in blood lactate levels.To assess tissue ROS levels, we took advantage of our previous observations of B cell defects in MICU1−/− mice.Using the redox-sensitive fluorophore 2’,7’-dichlorofluorescin diacetate, we noted a significant increase in the ROS levels observed in MICU1−/− B cells.A similar elevation in ROS levels was also seen in thymus-derived T cells.Mitochondrial uncoupling abrogated the difference in ROS levels between WT and MICU1−/− cells, consistent with a mitochondrial source for the elevated levels of ROS observed in MICU1−/− cells.Although the survival and appearance of newborn and young MICU1−/− mice was markedly impaired, we noted that surviving 
MICU1−/− mice appeared to improve over time.This improvement was evident in appearance and overall body weight.Although MICU1−/− mice remained smaller than their WT littermates, these differences narrowed considerably as the animals aged.With this phenotypic improvement, we noted that differences in resting calcium, ATP, and muscle lactate were no longer significantly different between older WT and MICU1−/− mice, although a trend toward increased resting calcium, decreased resting ATP, and increased lactate remained.Similarly, the differences in B cell abundance and ROS levels observed in young MICU1−/− mice were no longer evident as these animals aged.Histological assessment of the cerebellum of MICU1−/− mice revealed that previous abnormalities, such as a persistent outer granular layer, resolved in the brains of older MICU1−/− mice.This suggests that MICU1−/− mice experience a developmental delay, rather than a complete block, of this post-natal cerebellar process.Although many of the histological and biochemical parameters improved in the older MICU1−/− mice, we did, however, continue to see persistent functional defects in both neurological and skeletal muscle function.In addition, as these animals aged, new abnormalities arose.For instance, electron micrographs of skeletal muscle revealed the presence of tubular aggregates.Interestingly, in humans, this relatively rare myopathic abnormality has been recently described to result from alterations in skeletal muscle calcium handling because of patients inheriting dominant mutations in stromal interaction molecule 1, an endoplasmic reticulum calcium sensor.In an effort to explain why some of the parameters improved in MICU1−/− mice as they aged, we re-evaluated calcium uptake parameters from mitochondria derived from the liver of older MICU1−/− animals.Although the defects in calcium uptake at high calcium concentrations remained essentially unchanged, we noted that the loss of gatekeeping function at low calcium concentrations was now less marked.This suggested that some age-dependent remodeling of the uniporter complex might have occurred.We therefore examined the levels of MCU and EMRE proteins in the livers of young and old WT and MICU1−/− mice.We noted that, at 2 weeks of age, surviving MICU1−/− animals had similar levels of MCU as WT animals but had a reduced EMRE-to-MCU expression ratio.Interestingly, at 7 months of age, the EMRE/MCU expression ratio had fallen even lower in these now phenotypically improved older MICU1−/− mice.Previous observations with EMRE knockdown in cultured cells suggest that EMRE functions as a scaffold for MCU and is required to maintain channel opening.Our observation that reduced expression of EMRE in older MICU1−/− mice correlated with phenotypic improvement suggested that MICU1−/− mice might remodel their uniporter complex by reducing EMRE expression.Such remodeling would likely limit uniporter opening and, hence, help prevent calcium entry and in vivo calcium overload.To test this hypothesis, we reasoned that genetically reducing EMRE expression might therefore provide a benefit in the setting of MICU1 deficiency.Using CRISPR-mediated methods, we generated additional mice containing targeted deletions in the EMRE locus located on chromosome 15 of the mouse.We crossed MICU1+/−EMRE+/− mice to find out whether EMRE deficiency could rescue the perinatal mortality of MICU1−/− mice.As expected, only a small fraction of MICU1−/−EMRE+/+ mice survived into the first week.Similarly, to date, we have not 
observed any surviving MICU1−/−EMRE−/− mice; however, in this mixed genetic background, we were able to generate MICU1+/+EMRE−/− mice.Mice lacking EMRE but having wild-type MICU1 appeared to have normal body weight and exhibited no evidence of ataxia or defects in skeletal muscle function.Remarkably, MICU1−/− EMRE+/− mice were observed at nearly the expected frequency at weaning.We saw a similar capacity to rescue MICU1-deficient animals using a second, independent, CRISPR-generated EMRE-deficient mouse line.Interestingly, young MICU1−/− EMRE+/− mice had reduced hepatic levels of EMRE expression compared with MICU1−/− mice.The overall appearance and weight of the MICU1−/− EMRE+/− mice was indistinguishable from WT mice.Based on this genetic rescue, we next asked whether, in the context of MICU1 deletion, deleting one allele of EMRE resulted in improved calcium uptake.We observed that liver mitochondria from young MICU1−/− EMRE+/− mice still had slightly impaired gatekeeping function at low extra-mitochondrial calcium levels.Under these low-calcium conditions, MICU1−/− EMRE+/− mitochondria had roughly 2-fold higher rates of calcium uptake than WT mitochondria.Nonetheless, this defect was markedly reduced compared with the nearly 6-fold difference observed previously in MICU1−/− mitochondria.At high calcium concentrations, a situation in which MICU1−/− mitochondria demonstrate reduced rates of calcium uptake, EMRE heterozygosity resulted in a modest but further reduction in this property.These results are consistent with a role for EMRE in maintaining channel opening because, in the absence of MICU1 expression, deletion of one allele of EMRE reduced EMRE expression and resulted in reduced calcium uptake at both low and high extra-mitochondrial calcium levels.Interestingly, by itself, EMRE heterozygosity appeared to have no significant effect on hepatic EMRE levels or on calcium uptake.This suggests that, in the setting of wild-type levels of the other uniporter complex components, one allele of EMRE is sufficient to maintain the required level of protein expression.We cannot, however, exclude the possibility that the reason why the effects of deleting one allele of EMRE is only evident in the context of MICU1 deletion relates to the recent observation that EMRE’s activity can be regulated by a rise in matrix calcium levels, a situation that presumably occurs in MICU1−/−, but not WT, mitochondria.Consistent with the observed reduction in calcium uptake at low calcium levels, brain matrix calcium levels appeared to be similar between WT and MICU1−/− EMRE+/− mitochondria isolated from young mice.Skeletal muscle mitochondrial morphology was also apparently normalized in these mice.Similarly, although skeletal muscle ATP levels trended slightly lower than seen in WT mice, this difference was not statistically significant.Young MICU1−/− EMRE+/− animals also exhibited age-appropriate cerebellar morphology and an amelioration of the observed B cell alterations.Moreover, compared with MICU1−/− mice, MICU1−/− EMRE+/− mice exhibited improvement in their performance on the balance beam, although they were still impaired compared with WT animals.Similarly, skeletal muscle strength was still impaired in the MICU1−/− EMRE+/− mice, although, again, these animals performed slightly better than MICU1−/− animals.Our results significantly clarify the in vivo function of MICU1.In particular, using isolated WT and MICU1−/− mitochondria, we demonstrate that the absence of MICU1 increases mitochondrial calcium 
uptake rates at low calcium concentrations and reduces calcium entry rates at high calcium concentrations.Nonetheless, although these changes in gating properties could theoretically lead to either increased or decreased mitochondrial calcium levels, our data suggest that, in vivo, the primary function of MICU1 is to function as a molecular gatekeeper.In particular, in the absence of MICU1, mitochondrial matrix calcium levels are increased, and the deleterious consequences of this increase are evident in a wide range of abnormalities, including alterations in mitochondrial morphology, changes in ATP, and elevation of lactate levels.Further proof of the loss of gatekeeping function comes from analyzing animals bearing the MICU1−/−EMRE+/− genotype.Compared with MICU1−/− mitochondria, the mitochondria derived from these animals have reduced EMRE expression and slower rates of calcium uptake at both low and high extra-mitochondrial calcium concentrations.Thus, in the setting of MICU1 deletion, deletion of one allele of EMRE appears to restrict uniporter opening, thereby helping to prevent MICU1-induced calcium overload.The observation that EMRE heterozygosity also markedly improves the overall survival of MICU1−/− mice provides strong genetic proof that the primary in vivo function of MICU1 is to serve as a gatekeeper of the uniporter complex.Our data suggest that MICU1 is not required during embryogenesis.However, immediately after birth, the absence of MICU1 induces high rates of perinatal mortality, with the majority of MICU1−/− mice dying in the first 48 hr.Indeed, based on a large number of births from four independent founder lines, we estimate that, in a C57BL/6N background, only one in six to seven MICU1−/− mice survive beyond 1 week.In contrast, while this manuscript was under review, another group reported that MICU1 deletion resulted in complete perinatal mortality.These survival differences might relate to subtle differences in the animal facilities or, more likely, to the known mitochondrial differences between C57BL/6J and C57BL/6N sub-strains.Remarkably, our mice that did survive past 1 week function surprisingly well and actually appear to improve with time.We have currently observed MICU1−/− mice up to 1 year of age, and the only additional phenotype that has emerged is the appearance of chorea-like movements, a feature seen in human patients as well.This would suggest that the absence of MICU1 expression is particularly critical immediately after birth.This may relate to the essential role of the mitochondria in the transition from the relatively hypoxic environment of the placenta to the oxygenated environment found after birth.We do not have a clear understanding of why the functional defects in the surviving MICU1−/− mice seem to be most pronounced in skeletal muscle and the brain.In that regard, additional studies are needed to understand these tissue-specific effects.Comparisons of calcium uptake in different tissues, particularly between excitatory tissues such as skeletal muscle, brain, and heart versus non-excitatory tissues such as liver and kidney, might therefore be informative.The gradual improvement of MICU1−/− mice over time perhaps suggests that mitochondria are capable of undergoing some level of functional remodeling.Such remodeling has been demonstrated recently in animal models in which MCU activity is reduced and may also explain the more pronounced phenotype in mouse models of acute MCU deletion in adult animals when contrasted to models in which MCU is 
deleted or inhibited throughout embryogenesis.In the case of MICU1 deletion, this remodeling appears to involve alterations in the expression ratio of MCU to EMRE.It will be interesting to discern whether this remodeling represents some form of retrograde signaling from the mitochondria to the nucleus.There is increasing evidence of the importance of retrograde signaling whereby damaged or stress mitochondria emit a poorly characterized signal that elicits a protective nuclear response.In that regard, it is of interest to note that some previous studies have implicated calcium as a mediator of the mammalian retrograde response.Finally, there is increasing realization that mitochondrial calcium overload might be a final common pathway in a multitude of disease conditions, including various myopathies, certain neurodegenerative diseases, models of heart failure, and ischemic tissue injury.Therefore, strategies that modulate mitochondrial calcium uptake might have wide therapeutic potential.The demonstration here that modulating EMRE expression is capable of markedly improving the overall survival as well as the biochemical, neurological, and myopathic phenotype of MICU1−/− mice suggests that efforts to modulate the uniporter complex might be an effective therapeutic avenue for the growing number of disease states characterized by calcium overload.The MICU1 and EMRE knockout mice were generated using the CRISPR/Cas9 method as reported previously.Briefly, two single guide RNAs were designed to target each gene, one near the translation initiation codon and the other one further downstream within the same exon.Specifically, the nucleotide sequences for these sgRNAs were as follows: MICU1-sgRNA 1, TGTTAAGACGAAACATCCTG; MICU1-sgRNA 2, TCTGCAGTGACTGCAAGTAC; EMRE-sgRNA 1, GGAGCTGGAGATGGCGTCCA; and EMRE-sgRNA 2, GCCTGGGTTGCAGTTCGACC.These sequences were cloned into a sgRNA vector using OriGene’s gRNA cloning services and were then used as templates to synthesize sgRNAs using the MEGAshortscript T7 kit.Cas9 mRNA was in vitro-transcribed from plasmid MLM3613 using the mMESSAGE mMACHINE T7 Ultra kit.For microinjection, Cas9 mRNA was mixed with either one or both sgRNA for each gene and then microinjected into the cytoplasm of fertilized eggs collected from either C57BL/6N inbred or B6CBAF1/J F1 hybrid mice.The injected zygotes were cultured overnight in M16 medium at 37°C in 5% CO2.The next morning, embryos that had reached the two-cell stage of development were implanted into the oviducts of pseudopregnant foster mothers.The mice born to the foster mothers were genotyped using PCR and DNA sequencing.The MICU1 mice were maintained on a C57BL/6N or C57BL/6NxJ F1 background, and experimental mice were obtained by breeding the mice using heterozygous crosses.Genotyping for MICU1 was performed using the following primers: 5′-CTGAGGCCAATTAACTGC-3′ and 5′-GACACGACTAGCTGATAAACCT-3′.MICU1 heterozygous mice were bred to EMRE heterozygous mice to generate double-heterozygous breeders.Genotyping for EMRE was performed using the following primers: 5′-ACCGGCATACGAATGCGTGCTC-3′ and 5′-ACCGGCCTCAATCCCTTCGTC-3′.All animal studies were done in accordance with and approval of the National Heart, Lung, and Blood Institute Animal Care and Use Committee.MEFs were prepared using standard methods from embryonic day 12–E14 embryos resulting from MICU1 heterozygous breeders.MEFs were cultured in growth medium consisting of DMEM supplemented with 15% fetal bovine serum, 50 U/ml penicillin, and 50 μg/ml streptomycin.An 
epitope-tagged expression vector encoding MICU1 was generated using a C-terminal FLAG epitope in-frame with MICU1 derived from wild-type MEF cDNA.The construct was inserted into pCMV-Tag 4A using the following primers: 5′-CGC GGA TCC ATG TTT CGT CTT AAC ACC CTT TCT GCG-3′ and 5′-GCA TGT GAT ATC TTT GGG CAG AGC AAA GTC CCA GGC-3′.The construct was then inserted into the lentiviral expression vector pLVX-puro by using the following primers: 5′-TAG AAT TAT CTA GAG TCG CGG GAT CCG ACT CAC TAT AGG GCG AAT TGG G-3′ and 5′-CAG ATC TCG AGC TCA AGC TTC GAT TAA CCC TCA CTA AAG GGA ACA AAA GC-3′.Lentiviruses were produced in 293T cells and concentrated by ultracentrifugation by standard methods.MEFs were infected with a lentivirus expressing epitope-tagged MICU1 or empty pLVX vector at an estimated MOI of 0.5 and subsequently selected with puromycin.All constructs were verified by DNA sequencing.Mitochondria were isolated by standard differential centrifugation protocol.Tissues were first minced in isolation bufferpropanesulfonic acid, 0.5 mM EGTA, 2 mM taurine, pH adjusted to 7.25) and then homogenized using a Glas-Col homogenizer at 1,800 rpm for 8–12 strokes.The mixture was centrifuged for 5 min at 500 × g, and the supernatant was removed and centrifuged again.The supernatant was then centrifuged at 11,000 × g for 5 min to pellet the mitochondria, which were washed once more with isolation buffer and resuspended.Protein content was measured using a bicinchoninic acid assay protein assay.Measurement of mitochondrial Ca2+ content was performed as described previously, with minor adjustments as delineated below.In brief, brain mitochondria were isolated as described above in the presence of 3 μM Ru360 and washed in isolation buffer without EGTA.Mitochondria were pelleted and diluted in 0.6 N HCl, homogenized, and sonicated.Samples were heated for 30 min at 95°C and then centrifuged for 5 min at 10,000 × g.The supernatants were recovered, and the Ca2+ content in the supernatants was determined spectrophotometrically using the O-Cresolphthalein Complexone calcium assay kit.Only samples that fell within the range of the standard curve were included for analysis, and all values were normalized to the calcium levels observed in WT mitochondria.Calcium uptake in mitochondria and MEFs was assayed similarly to what has been described previously.Briefly, isolated mitochondria and MEFs were resuspended in a buffer containing 125 mM KCl, 2mM K2HPO4, 10 μM EGTA, 1 mM MgCl2, 20 mM HEPES, 5 mM glutamate, and 5 mM malate.For MEFs, 0.004% digitonin was added to permeabilize the cells.The fluorescent cell-impermeable Ca2+ indicator Fluo4 or Calcium Green-5N was added to the buffer.Fluorescence was measured at 506-nm excitation and 532-nm emission on an Omega plate reader.Experiments were initiated by injecting 5 μM or 25 μM CaCl2, resulting in an approximate free calcium concentration of 0.5 μM or 16 μM free , respectively.Calcium uptake curves were normalized to baseline and maximum, and the relative rate of calcium uptake was calculated as the absolute value of the slope from linear regression fit in the linear range of the fluorescent signal.Relative rates of calcium uptake were normalized to the WT.J.C.L., J.L., K.M.H., S.M., R.J.P., and M.M.F. performed the experiments.Z.Y. and C.H. helped with the pathological analysis.D.A.S. performed the neurological and skeletal muscle phenotyping.C.L. constructed the mouse models.E.M. and T.F. conceived the project.J.C.L. and T.F. wrote the manuscript.
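As a rough illustration of how the uptake rate described above can be extracted from a fluorescence trace, the sketch below normalizes the extra-mitochondrial calcium-indicator signal to its baseline and maximum, fits a line over a chosen linear window, and reports the absolute slope relative to a wild-type reference. The trace, the time window and the reference value are hypothetical; the actual analysis in this study may differ in detail.

```python
import numpy as np

def relative_uptake_rate(time_s, fluorescence, linear_window, wt_rate):
    """Return |slope| of the normalized fluorescence signal within
    linear_window (start, stop in seconds), divided by the wild-type rate."""
    f = np.asarray(fluorescence, dtype=float)
    f_norm = (f - f.min()) / (f.max() - f.min())       # normalize to baseline and maximum
    t = np.asarray(time_s, dtype=float)
    mask = (t >= linear_window[0]) & (t <= linear_window[1])
    slope, _ = np.polyfit(t[mask], f_norm[mask], 1)    # linear regression in the linear range
    return abs(slope) / wt_rate

# Hypothetical example: the extra-mitochondrial indicator signal falls as
# mitochondria take up the injected CaCl2.
t = np.linspace(0, 120, 121)
trace = np.exp(-t / 60.0)
rate = relative_uptake_rate(t, trace, linear_window=(10, 50), wt_rate=0.008)
```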
MICU1 is a component of the mitochondrial calcium uniporter, a multiprotein complex that also includes MICU2, MCU, and EMRE. Here, we describe a mouse model of MICU1 deficiency. MICU1−/− mitochondria demonstrate altered calcium uptake, and deletion of MICU1 results in significant, but not complete, perinatal mortality. Similar to afflicted patients, viable MICU1−/− mice manifest marked ataxia and muscle weakness. Early in life, these animals display a range of biochemical abnormalities, including increased resting mitochondrial calcium levels, altered mitochondrial morphology, and reduced ATP. Older MICU1−/− mice show marked, spontaneous improvement coincident with improved mitochondrial calcium handling and an age-dependent reduction in EMRE expression. Remarkably, deleting one allele of EMRE helps normalize calcium uptake while simultaneously rescuing the high perinatal mortality observed in young MICU1−/− mice. Together, these results demonstrate that MICU1 serves as a molecular gatekeeper preventing calcium overload and suggests that modulating the calcium uniporter could have widespread therapeutic benefits.
470
Ictal source imaging and electroclinical correlation in self-limited epilepsy with centrotemporal spikes
Self-limited epilepsy with centrotemporal spikes is the most common syndrome of idiopathic focal epilepsy in children .Seizures typically occur during sleep or awakening.Since seizure frequency is usually low, there are scarce reports on the ictal EEG pattern in this syndrome; most of the papers are case reports or small case series.The largest number of patients was reported by Capovilla and co-workers .They retrospectively collected 30 patients with ictal recordings, and they identified four types of ictal patterns, the most common being the low-voltage fast activity.However, the precise anatomic location of the ictal activity, and the temporal correlation between the ictal EEG and the semiological manifestation have not been systematically addressed yet.Source imaging of ictal EEG activity faces additional challenges, due to artefacts which are often superimposed on the ictal EEG activity and due to the rapid propagation .However, advances in signal analysis have made it possible to develop standardised methods for ictal EEG source imaging, that were validated in patients with therapy-resistant focal epilepsy who underwent surgery .Here, we report ictal EEG source imaging and electroclinical correlation in three patients with self-limited epilepsy with centrotemporal spikes.To the best of our knowledge, this is the first report on ictal source imaging in this syndrome.Video-EEG recordings from three patients with self-limited epilepsy with centrotemporal spikes, who had spontaneous seizures during recording, were analysed.All patients were referred to EEG on clinical indications, and all parents gave their informed consent for the recordings and for analysis and post-processing of the recorded data, for scientific purpose.EEGs were recorded using an extended version of the IFCN 10–20 array, with six additional electrodes in the inferior temporal electrode chain .Table 1 summarises the demographic and clinical data of the patients.Diagnosis was based on the ILAE criteria .Ictal source imaging was performed using the method described and validated in previous studies .Briefly: rhythmic ictal activity at the onset of the seizures was identified visually and using density spectral array.The initial part of the rhythmic activity was defined using Fast Fourier Transform in sliding windows with 50% overlap, independent component analysis, and inspection of the voltage map of each ictal wave.Averaged ictal waveforms were analysed using two different inverse methods: discrete multiple dipoles fitting, and a distributed source model in the brain volume, i.e., classical LORETA analysis recursively applied .As head model, we used a finite-elements model in BESA-MRI, with age-matched templates.BESA Research 6.1 was used for the signal analysis.Video-recordings were analysed and reported by three trained clinical neurophysiologists, with more than 10-year experience in long-term video-EEG monitoring.The EEG pattern showed similar features in all three patients.In the period preceding the seizures, an increase in the occurrence of right centrotemporal sharp-and-slow wave discharges was observed, leading to quasi-rhythmic trains of 1–2 Hz frequency.In patient 2, a second pre-ictal focus of sharp-and-slow-waves occurred in the left central and mid-central area.Two to ten seconds before the start of the clinical seizure, the pre-ictal sharp-and-slow-wave activity was replaced in the right centrotemporal region by evolving rhythmic ictal activity, starting with low-amplitude, 9.7–13.5 Hz frequency, and 
gradually increasing in amplitude and decreasing in frequency to 6–8 Hz.Source imaging of the ictal activity localized to the right operculo-insular area.Equivalent current dipoles localized to the opercular part of the right central area, with exception of the ictal activity in patient 3, where it was localized to the insula.Distributed source model localized to the right insula in all three cases.In patient 2, the second pre-ictal focus localized to the left mesial central area.Seizures started from the awake period in one patient, from drowsiness in the second patient and immediately following awakening in the third patient.EEG start preceded clinical start in all three cases.The first semiological manifestation was left perioral: tonic muscle activation causing mouth-deviation to the left.This was followed by arrhythmic myoclonic jerks, superimposed on the focal tonic muscle activation.These phenomena gradually extended from the perioral region to the left side of the face.The myoclonic jerks increased in frequency, became rhythmic at 9.5 Hz, 8.1 Hz and 8.2 Hz.These high frequency clonic jerks were time-locked to the contralateral EEG rhythmic activity with the same frequency.The seizures were fragmented, and they showed a fluctuating course with pauses of ictal activity in all three cases.The pauses consisted in total cessation of clinical and electrographic seizure activity, ranging from 0.4 to 7 s.Duration of their respective seizures was 33 s in patient 1, 45 s in patient 2, and 2 min and 21 s in patient 3.Previous studies using electromagnetic source imaging have predominantly analysed the location of the interictal epileptiform discharges in self-limited epilepsy with centrotemporal spikes .These studies reported sources of the irritative zone to be in the inferior part of the Rolandic area and the operculum .However, it has long been posited that the irritative zone might not be necessarily identical with the area that generates the seizures, and hence the source imaging of ictal activity should be obtained whenever possible.In this study, we have found that the source of ictal activity was in the operculum and insula.This is consistent with data from intracranial recordings in patients with therapy-resistant focal epilepsy, showing that seizures originating in the opercular rolandic area had semiology similar to patients with self-limited epilepsy with centrotemporal spikes .Furthermore, the frequency of the ictal rhythms recorded by intracranial electrodes in this area was in the alpha and lower beta range , which is similar to the ictal rhythms we analysed in this study.The contralateral myoclonic jerks in our patients were time-locked to the rhythmic ictal activity we analysed, underlying the correlation between the observed activity in the operculo-insular area and the semiological phenomena.In keeping with our findings, a previous study using fluorodeoxyglucose-positron emission tomography also showed significant changes in the opercular areas in self-limited epilepsy with centrotemporal spikes .In addition, a study combining EEG source imaging and fMRI showed propagation of the interictal activity from the rolandic region corresponding to the hand and face area, to the operculum and insula .It has been previously suggested that involvement of insula likely explains the sensations of laryngeal constriction and choking that is often reported by patients with self-limited epilepsy with centrotemporal spikes .It is of particular note that in our patients a fluctuating, 
fragmented course was recorded, with complete pauses of ictal EEG and motor activity.Similar fluctuating course has previously been described in patients with psychogenic non-epileptic seizures .Our findings suggest that a frank centrotemporal ictal activity might also show intermittent progression.To avoid misdiagnosis, it is important to emphasise that such seizure-dynamics can occur in rolandic seizures too.To the best of our knowledge, this is the first study on ictal source imaging in patients with self-limited epilepsy with centrotemporal spikes.Our findings emphasise the importance of the operculo-insular network for the ictogenesis in this syndrome.None of the authors has any conflict of interest to disclose.
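As a generic illustration of the density spectral array and sliding-window FFT step used in the Methods above to delineate the onset of the rhythmic ictal activity (the actual analysis was performed in BESA Research 6.1), the sketch below computes power spectra in windows with 50% overlap for a single EEG channel. The channel data, sampling rate and window length are assumed inputs, not values from the recordings.

```python
import numpy as np

def density_spectral_array(eeg, fs, win_s=2.0, overlap=0.5):
    """Sliding-window FFT power for one EEG channel (samples in uV, sampling
    rate fs in Hz).  Returns (times, freqs, power), where power[i, j] is the
    spectral power of window i at frequency j."""
    n_win = int(win_s * fs)
    step = int(n_win * (1.0 - overlap))            # 50% overlap by default
    taper = np.hanning(n_win)
    starts = range(0, len(eeg) - n_win + 1, step)
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    power = np.array([np.abs(np.fft.rfft(eeg[s:s + n_win] * taper)) ** 2
                      for s in starts])
    times = np.array([(s + n_win / 2) / fs for s in starts])
    return times, freqs, power

# The evolving rhythmic ictal activity (e.g. 9.7-13.5 Hz slowing towards
# 6-8 Hz) appears as a band of increased power drifting downwards in frequency.
```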
Purpose To elucidate the localization of ictal EEG activity, and correlate it to semiological features in self-limited epilepsy with centrotemporal spikes (formerly called “benign epilepsy with centrotemporal spikes”). Methods We have performed ictal electric source imaging, and we analysed electroclinical correlations in three patients with self-limited epilepsy with centrotemporal spikes. Results The source of the evolving rhythmic ictal activity (9.7-13.5 Hz) localized to the operculo-insular area. The rhythmic EEG activity was time-locked to the contralateral focal motor seizure manifestation: facial rhythmic myoclonic jerks, with the same frequency as the analysed ictal rhythm. In all three patients, the seizures had fluctuating course with pauses of clinical and electrographic seizure activity, ranging from 0.4 to 7 s. Conclusion Source imaging of ictal EEG activity in patients with self-limited epilepsy with centrotemporal spikes showed activation of the operculo-insular area, time-locked to the contralateral focal myoclonic jerks. Fragmented seizure dynamics, with fluctuating course, previously described as a hallmark in patients with psychogenic non-epileptic seizures, can occur in rolandic seizures.
471
The impact of early- and late-onset preeclampsia on umbilical cord blood cell populations
Preeclampsia is a heterogeneous disease with early-onset and late-onset PE as the main phenotypes.Due to inadequate spiral artery remodelling with suboptimal placental perfusion, excessive amounts of oxidative stress can lead to an enhanced release of syncytiotrophoblast microparticles and cytokines, which particularly contributes to the pathogenesis of the more severe EOPE phenotype.In contrast, LOPE shows a relatively normal initial placentation and is associated with conditions that enhance excessive oxidative stress and placental inflammation later in pregnancy, such as obesity and pre-existing hypertension.Circulating syncytiotrophoblast microparticles can induce an increased maternal systemic inflammatory response with increased numbers of neutrophils and total leukocytes in maternal peripheral blood.The impact of PE on newborn umbilical cord blood cell populations, however, has been scarcely studied.During pregnancy, haematopoiesis takes place in the yolk sac, liver, bone marrow as well as in the placenta, generating all blood cell types from a small population of pluripotent hematopoietic stem cells as pregnancy advances.We hypothesise that PE, in particular EOPE, deranges fetal haematopoiesis resulting in heterogeneity of UCBC populations, and investigated the associations between UCBC counts and differentials in early- and late-onset PE.Study design Between June 2011 and June 2013 we included pregnant women in a prospective hospital-based periconceptional birth cohort: The Rotterdam Periconceptional Cohort, at the Erasmus MC, University Medical Centre Rotterdam, The Netherlands.For the current secondary cohort analysis, we selected EOPE and LOPE as cases and uncomplicated pregnancies as controls.To adjust for the often accompanied fetal growth restriction and iatrogenic preterm birth in PE, we oversampled the uncomplicated control group with FGR and PTB as complicated controls.Pregnancies were included in the cohort during the first trimester or after the first trimester when they were referred to our hospital.PE was defined according to the International Society for the Study of Hypertension in Pregnancy as gestational hypertension of at least 140/90 mmHg accompanied by an urine protein/creatinine ratio of ≥30 mg/mmol, arising de novo after the 20th week of gestation.EOPE was defined when PE was diagnosed before 34 weeks of gestation, LOPE when diagnosed after 34 weeks of gestation.Uncomplicated control pregnancies were defined as pregnancies without the presence of PE, gestational hypertension, FGR or PTB.FGR inclusion was based on an estimated fetal weight below the 10th percentile for gestational age based on ultrasound measurements performed between 20 and 38 weeks of gestation.Birth weight percentiles were calculated using the reference curves of the Dutch Perinatal Registry to validate birth weight ≤10th percentile and exclude those newborns with birth weight >10th percentile.Spontaneous preterm deliveries between 22 and 37 weeks of gestation were defined as PTB.Women with HIV infection, age <18 years and insufficient knowledge of the Dutch language could not participate and pregnancies complicated with a fetal congenital malformation and twins were excluded for the current study.Maternal comorbidity was defined by any concurrent cardiovascular-, heamatologic-, endocrine-, metabolic-, auto-immune- or renal disease.Maternal and fetal characteristics were obtained from hospital medical records.All women gave written informed consent before participation and written parental 
informed consent was obtained for the child. Ethical approval was given by the Erasmus MC, University Medical Centre Research Ethics Board. Umbilical cord blood samples from the umbilical vein were obtained in vacutainer tubes immediately after delivery and clamping of the umbilical cord. Samples were transported at room temperature and subjected to flow cytometric analysis within 48 h after delivery to quantify erythrocytes, thrombocytes and leucocyte differentials. Between arrival at the Clinical Chemistry Laboratory and the time of analysis, samples were stored at 4–8 °C. The quality of the blood cell counts was guaranteed by a manual check, whereby flow cytometric data with suspect plots or reported system errors were excluded from further analysis. We used cell numbers/L for the analysis of leucocyte differentials and nucleated red blood cells (NRBC), which is preferable to the widely used percentages of total leucocyte count, since the largely variable total leucocyte count could result in misleading percentages. The normally distributed maternal and newborn characteristics were tested using Analysis of Variance to detect overall differences between the groups, followed by the posthoc Dunnett t-test for pairwise comparisons of EOPE and LOPE with uncomplicated controls and with FGR and PTB complicated controls. The Dunnett t-test limits the multiple testing problem by comparing each group to one reference group only. The Kruskal-Wallis test was applied to all non-parametric maternal and newborn characteristics, followed by pairwise Mann-Whitney tests for posthoc comparisons. Log-transformation was applied to the non-parametric UCBC to achieve normal distributions of neutrophils, monocytes, eosinophils, basophils, NRBC and immature granulocytes. We converted zero values of neutrophils and NRBC into half of the lowest detectable value of the Sysmex haematology system prior to log-transformation. Linear regression analysis was performed to investigate the association between UCBC counts and differentials and EOPE/LOPE versus the pooled group of uncomplicated and complicated controls. In the crude linear regression analyses, UCBC counts were estimated with group as the only predictive variable. In the adjusted multivariable analyses, gestational age (GA) and birth weight (BW) were additionally entered into the model as covariates, giving the formula: UCBC = β0 + β1·group + β2·GA + β3·BW + ε. Here, group is an indicator variable that is 1 for EOPE or LOPE and 0 for the pooled group of uncomplicated and complicated controls, and UCBC represents the concentration of a certain UCBC population.
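To make the adjusted model above explicit, the following sketch fits the same kind of regression on a hypothetical data frame, in which the outcome is a log-transformed UCBC count, group codes EOPE (or LOPE) as 1 versus the pooled controls as 0, and ga and bw are gestational age and birth weight. The column names, the zero-replacement value and the example rows are illustrative assumptions, not the study's actual variables or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_ucbc_model(df, count_col, detection_limit):
    """Fit UCBC = b0 + b1*group + b2*GA + b3*BW + error on log-transformed
    counts, replacing zeros by half the lowest detectable value first."""
    d = df.copy()
    d["y"] = np.log(d[count_col].replace(0, detection_limit / 2.0))
    return smf.ols("y ~ group + ga + bw", data=d).fit()

# Hypothetical illustration only (not study data):
df = pd.DataFrame({
    "group": [1, 1, 0, 0, 0, 0],                    # 1 = EOPE, 0 = pooled controls
    "ga":    [31, 33, 39, 40, 38, 37],              # gestational age (weeks)
    "bw":    [1.4, 1.9, 3.4, 3.6, 3.1, 2.9],        # birth weight (kg)
    "neutrophils": [1.2, 0.0, 6.5, 7.8, 5.9, 6.1],  # x10^9 cells/L
})
fit = adjusted_ucbc_model(df, "neutrophils", detection_limit=0.1)
print(fit.params)  # b1 estimates the EOPE effect adjusted for GA and BW
```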
All measurements were performed with IBM SPSS Statistics version 21.0. From the Predict Study we included all eligible women for this secondary cohort analysis who met the inclusion criteria described earlier. After exclusion of 194 pregnancies due to missing blood samples or poor quality of the blood cell counts, 218 pregnancies were included for analysis. Patients with missing data were characterised by a shorter gestational age and lower birth weight compared to the final study population, and contained twice as many EOPE and LOPE pregnancies, as depicted in Supplemental Table 1. The final study population comprised 23 cases of PE, including 11 EOPE and 12 LOPE, and 195 controls, including 153 uncomplicated controls and 23 FGR and 19 PTB complicated controls. Maternal and newborn characteristics are shown in Table 1. In addition to the case-specific parameters blood pressure, proteinuria, gestational age and birth weight, a significantly lower mean maternal age was observed in EOPE versus LOPE and uncomplicated controls. EOPE pregnancies ended more often in a caesarean section compared to LOPE and the uncomplicated and complicated controls. Comorbidity was significantly different between the groups and highest in uncomplicated controls, but no significant differences were observed in the posthoc analysis. Neonatal temperature at birth was similar for each group. By ANOVA testing, we observed significantly lower cell counts for all UCBC populations and a significantly higher NRBC count in EOPE versus the uncomplicated and complicated controls. In LOPE, only significantly higher neutrophil and erythrocyte counts and lower reticulocyte counts compared to PTB complicated controls were observed. In Table 2 we show the results of the linear regression analyses of the UCBC counts and differentials of both EOPE and LOPE versus the pooled uncomplicated and complicated controls. The crude estimates revealed that EOPE was associated with decreased counts of total leucocytes, monocytes, neutrophils, eosinophils, immature granulocytes and thrombocytes. EOPE was associated with an increased NRBC count. After adjustment for gestational age and birth weight, EOPE remained associated with a decreased neutrophil count and an increased NRBC count. The linear regression analyses did not reveal any significant association of LOPE versus the uncomplicated and complicated control group. In this study we observed that pregnancies complicated by EOPE are associated with decreased leucocyte and thrombocyte counts and with increased NRBC counts in umbilical cord blood. After adjustment for gestational age and birth weight, EOPE remained associated with decreased neutrophil and increased NRBC counts. Our findings demonstrate that the associations of most UCBC counts with EOPE are confounded by gestational age and birth weight, which is in agreement with previous studies. The analyses also revealed that LOPE, compared to EOPE, has only a marginal impact on UCBC populations, which may be explained by its milder phenotype as well as by the absence of FGR and PTB in this group. The 4–7-fold decrease in neutrophil count and the 5-fold increase in NRBC count in association with EOPE, however, were independent of gestational age and birth weight. Because the innate immune system matures during pregnancy, the newborn's innate immune system is prepared to be fully functional at birth by a sudden neutrophil rise during the late third trimester. The excessive oxidative stress from early pregnancy onwards might have affected UCB neutrophil counts in EOPE by generating enhanced inflammation in the fetal
circulation, as demonstrated earlier by higher activated neutrophils and monocytes as well as increased CRP, α-1-antitrypsin and plasma chemokine levels.As a consequence, fetal endothelial cell dysfunction might occur, by which the maturation and development of fetal haematopoiesis can be affected.Fetal haematopoiesis originates from endothelial cells in the ventral aorta of the developing embryo and is thus extremely sensitive to endothelial damage.It has been suggested that the maternal endothelial cell damage is of more importance in EOPE than LOPE and that the excessive oxidative stress develops only towards the end of gestation in LOPE.This is in line with the observed difference in leucocyte counts between EOPE and LOPE.The association between PE and decreased UCB leucocyte count has been described before and is in agreement with our findings.Low neutrophil counts might result in a temporarily reduced immune capacity of the newborn, especially if the child is also born preterm.This might increase the vulnerability for infections.The observed increase of UCB NRBC in EOPE pregnancies is in line with earlier studies.However, Akercan- and Catarino et al. did not observe this increase independent of gestational age, which can be explained by the lack of separate analysis for EOPE and LOPE.High numbers of circulating NRBC can reflect an activation of erythropoiesis as a response to the placental ischemia-reperfusion phenomenon resulting from diminished and intermittent perfusion of the intervillous space or a compensation of the erythrocyte-damage, both more profoundly present in EOPE than in LOPE.The suboptimal placental perfusion results in a relatively hypoxic placental environment, which is beneficial for early invasion of the cytotrophoblast into the maternal decidua.However a prolonged hypoxic placental state may lead to an over-expression of hypoxia-inducible factor 1α, regulating several processes such as erythropoiesis.Placental over-expression of HIF-1α has been described in PE pregnancies, which may explain our finding of enhanced NRBC counts in umbilical cord blood, being a result of HIF-1α-induced erythropoietin-release.A strength of the study is that we investigated associations between PE and UCBC counts and differentials in EOPE and LOPE separately, which revealed a much stronger association between UCBC counts and EOPE, and is relevant concerning the different aetiologies of both.Moreover, associations were investigated independent of gestational age and birth weight.Pregnancies complicated by EOPE in our study population ended more often in a Caesarean section.This unfortunately resulted in more missing blood samples compared to LOPE due to the emergency of the Caesarean sections.Because of the sample size we were not able to adjust for many confounders and therefore inherent to an observational study residual confounding cannot be excluded.The wide confidence intervals demonstrate that the sample size also resulted in a limited power of the study.This implies that certain UCBC values with seemingly clinical relevant UCBC differences between groups might have failed to achieve statistical significance because of lack of power.Additionally, a selection bias due to the relatively high percentage of EOPE and LOPE pregnancies with missing data might be present, but this is an often occurring problem in high-risk patients where medical care is a priority.Another limitation of our study is the tertiary university hospital-setting, in which uncomplicated pregnancies presented 
with a relatively high percentage of concurrent comorbidity, for which they were referred. These patients were mostly included in the cohort study in the first trimester of pregnancy. Complicated PE, FGR and PTB pregnancies were more often included as late cohort inclusions after the first trimester. They presented with less additional comorbidity, as visualised in Supplemental Fig. 1. Only two neonates, in the EOPE and LOPE groups, were complicated by FGR. Therefore, future studies may address differences in UCBC populations in a subgroup of early- and late-onset PE complicated by FGR. Derangements of fetal haematopoiesis, in particular of neutrophil- and NRBC counts, are associated with EOPE only. These findings imply a potential impact on the future health of the offspring, and indicate that heterogeneity in UCBC should be considered as a confounder in epigenetic association studies examining EOPE. Further investigation is needed to establish this potential impact on the future health of the offspring. The authors report no conflict of interest. This work was financially supported by the Department of Obstetrics and Gynaecology, Erasmus MC, University Medical Centre Rotterdam, The Netherlands. The funding source had no direct involvement in the realization of the manuscript.
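The adjusted regression described above (UCBC counts modelled against pregnancy group with gestational age and birth weight as covariates) can be illustrated with a minimal sketch. This is not the authors' SPSS workflow: the data frame, its column names and the log-transformation of the counts are assumptions made purely for this example.

```python
# Illustrative sketch (not the authors' SPSS workflow) of an adjusted linear
# regression of a cord blood cell count on pregnancy group, controlling for
# gestational age and birth weight. All data and column names are synthetic
# assumptions made for this example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "group": rng.choice(["control", "EOPE", "LOPE"], size=n, p=[0.7, 0.15, 0.15]),
    "gestational_age": rng.normal(38, 3, n),          # weeks (synthetic)
    "birth_weight": rng.normal(3200, 600, n),         # grams (synthetic)
    "neutrophil_count": rng.lognormal(1.5, 0.5, n),   # x10^9/L (synthetic)
})
df["log_neutrophils"] = np.log(df["neutrophil_count"])  # assumed transform

model = smf.ols(
    "log_neutrophils ~ C(group, Treatment(reference='control'))"
    " + gestational_age + birth_weight",
    data=df,
).fit()
print(model.summary())  # coefficients for EOPE/LOPE vs controls, adjusted
```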
Pregnancies complicated by preeclampsia (PE) are characterised by an enhanced maternal and fetal inflammatory response with increased numbers of leukocytes in maternal peripheral blood. The impact of PE on newborn umbilical cord blood cell (UCBC) populations however, has been scarcely studied. We hypothesise that PE deranges fetal haematopoiesis and subsequently UCBC populations. Therefore, the objective of this study was to investigate newborn umbilical cord blood cell populations in early- (EOPE) and late-onset PE (LOPE). A secondary cohort analysis in The Rotterdam Periconceptional Cohort was conducted comprising 23 PE cases, including 11 EOPE and 12 LOPE, and 195 controls, including 153 uncomplicated and 23 fetal growth restriction- and 19 preterm birth complicated controls. UCBC counts and differentials were quantified by flow cytometry and analysed as main outcome measures. Multivariable regression analysis revealed associations of EOPE with decreased leucocyte- (monocytes, neutrophils, eosinophils, immature granulocytes) and thrombocyte counts and increased NRBC counts (all p < 0.05). EOPE remained associated with neutrophil- (β-0.92, 95%CI -1.27,-0.57, p < 0.001) and NRBC counts (β1.11, 95%CI 0.27,1.95, p = 0.010) after adjustment for gestational age and birth weight. LOPE did not reveal any significant association. We conclude that derangements of fetal haematopoiesis, in particular of neutrophil- and NRBC counts, are associated with EOPE only, with a potential impact for future health of the offspring. This heterogeneity in UCBC should be considered as confounder in epigenetic association studies examining EOPE.
472
Mast cell activation test in the diagnosis of allergic disease and anaphylaxis
We developed a novel diagnostic tool, the MAT, in which primary hMCs generated from peripheral blood precursors from healthy donors were sensitized passively with patients' sera and then incubated with allergen in vitro, and MC activation was assessed.All study participants provided written informed consent.hMCs were generated, as previously described.21-23,Briefly, CD117+CD34+ cells were purified from buffy coat blood mononuclear cells by using a positive selection kit.Cells were cultured in serum-free StemSpan medium supplemented with 100 U/mL penicillin, 100 μg/mL streptomycin, human IL-6, human IL-3, human stem cell factor, and 10 μg/mL human low-density lipoprotein.After 30 days, the cells were transferred progressively to culture medium containing Iscove modified Dulbecco medium with GlutaMAX-I, 50 μmol/L β2-mercaptoethanol, 0.5% BSA, 1% Insulin-Transferrin-Selenium, 100 U/mL penicillin, 100 μg/mL streptomycin, human IL-6, and human stem cell factor.After 8 to 10 weeks of culture, the cells were tested for maturity and found to be greater than 90% CD117+ and FcεRIa+ cells."We used immunocytochemistry to characterize hMCs generated from peripheral blood precursors.Cultured primary hMCs were sensitized passively with serum samples from subjects with a physician-confirmed peanut allergy recruited from the Allergy Centre at the University Hospital of South Manchester.All patients had a convincing history of immediate reaction on exposure to peanut and detectable serum specific IgE to whole peanut extract.Control serum was collected from patients with pollen allergy but no history of peanut allergy who were consuming peanuts and had negative IgE and/or SPT results to whole peanut extract.To assess whether the MAT could be applied to nonfood allergens, we recruited 28 patients presenting with an acute episode of anaphylaxis to the emergency department of the University Hospital Golnik, Slovenia, caused by an insect sting; 21 patients had a confirmed systemic reaction and sIgE levels to wasp venom, and 7 patients had a confirmed systemic reaction and sIgE levels to honeybee but not wasp venom."hMCs were cultured in supplemented medium and sensitized passively by means of overnight incubation with the participants' sera.Cells were washed and treated with peanut extract at 0.01, 0.1, 1, 10, 100, and 1000 ng/mL protein or 10 nmol/L recombinant peanut allergens rAra h 1, rAra h 2, rAra h 3, rAra h 6, and rAra h 8 or left untreated."Allergen sources are described in detail in the Methods section in this article's Online Repository.As a positive control, sera-sensitized hMCs were incubated with goat anti-human IgE.After a 1-hour incubation, hMCs were stained with CD117, FcεRIa, CD63, and CD107a antibodies and analyzed by means of flow cytometry with the LSR II or Fortessa and FlowJo software.Intracellular tryptase levels were evaluated with an appropriate kit with anti-human tryptase and a secondary anti-mouse IgG.To ensure quality control across batches of hMCs, in each run we included a reference positive control and anti-IgE.Each batch was generated from 3 to 9 pooled donors to reduce the risk of specific donor dependence.After incubation with allergen, 50-μL aliquots from cell cultures were taken and centrifuged to separate the supernatant and cell pellet.Cell pellets were lysed in 50 μL of media culture 1% Triton X-100.β-Hexosaminidase levels were measured in supernatants, as well as in cell pellets, by adding 100 μL of β-hexosaminidase substrate and 1 mmol/L p-nitrophenyl 
N-acetyl-beta-D-glucosamine in 0.05 mol/L citrate buffer for 2 hours at 37°C in a 5% CO2 atmosphere.The reaction was stopped by adding 300 μL of 0.05 mol/L sodium carbonate buffer.OD was measured at 405 nm.hMC degranulation was assessed as percentage release of total β-hexosaminidase.Prostaglandin D2 levels were measured in supernatants by using the ELISA kit from Cayman Chemical."We recruited 42 peanut-sensitized subjects who underwent DBPCFCs to peanut.Patients who reacted on DBPCFC were considered to be allergic to peanut, whereas those who passed the challenge without experiencing dose-limiting symptoms were classified as sensitized but peanut tolerant.Blood samples were collected immediately before challenge and transferred without delay for assessment of basophil activation or centrifuged, and sera were stored at −80°C until analysis.Levels of total IgE, peanut-specific IgE, and IgE to the recombinant allergen components rAra h 1, 2, 3, 6, 8, and 9 were measured by using ImmunoCAP.SPTs were undertaken according to national guidelines by using lancets and commercial peanut extract, with 1% histamine as a positive control.BATs were performed, as described previously.24,In brief, heparinized whole blood from sensitized subjects was incubated with peanut allergen extract or anti-IgE in a 37°C water bath for 15 minutes.Cells were immunostained with anti-human CD3, CD303, CD294, CD203c, CD63, and CD107a.Erythrocytes from whole blood were lysed with BD lysing solution for 10 minutes at room temperature in the dark, samples were centrifuged, and supernatants were discarded.The resulting cell pellets were washed in 3 mL of PBS and resuspended in 450 μL of ice-cold fixative solution before acquisition on the BD FACSCanto II flow cytometer.Nonactivated and activated basophils were identified as CD203cdimCRTH2+ and CD203cbrightCD3−CD303−CRTH2+ cells, respectively.Additionally, activated cells were also identified as CD63+ and CD107a+CD3−CD303−CRTH2+ basophils.Analyses were performed with BD FACSDiva software.A 4-parameter logistic regression model was used to fit the dose-response curve and estimate the half-maximal effective concentration for each patient.Threshold sensitivity, the inverse of the half-maximal effective allergen concentration multiplied by 100 was then calculated, as described previously by Johansson et al.25 Higher CD-sens values indicate greater sensitivity.To best represent the MAT response as a single number, we calculated the area under the curve using the trapezoidal rule on logarithmically transformed venom concentrations, as previously described.26,Statistical analyses were performed with R software and its affiliated software packages.Data are represented as medians and interquartile ranges and were compared by using a Mann-Whitney U test.A 2-sided P value of less than .05 was considered statistically significant.Correlation coefficients were calculated by using the Spearman R test in Prism software.Intraclass correlation was calculated in R software to assess MAT and BAT reproducibility.We used ICC rather than coefficient of variation because the former is a more appropriate measure of interassay variation where there is no natural zero point.27,ROC curves and associated parameters were determined with Prism software.To identify the dominant modes of variation of the response patterns, we applied functional principal component analysis to the fitted curves.28,We then used k-means clustering to estimate distinct response patterns.To determine the optimal number of 
clusters, we used several evaluation measures available through the R package NbClust.29, "Further details of analyses can be found in the Methods section in this article's Online Repository. "After 8 to 10 weeks of culture, hMCs derived from peripheral blood precursors had the phenotypic and functional properties of mature hMCs: they expressed CD117+ and surface IgE receptors that bound strongly to serum IgE.We confirmed the presence of tryptase and chymase using immunofluorescence, with characteristic granularity patterns after staining with Giemsa and toluidine blue.We passively sensitized primary hMCs using sera from patients with peanut and pollen allergy.To assess their degranulation after stimulation with peanut and grass allergen extract, we measured surface expression of CD63 and CD107a using flow cytometry30 and release of β-hexosaminidase and PGD2."In vitro incubation with allergen resulted in a dose-dependent increase in CD63 and CD107a membrane expression and β-hexosaminidase release; all immunologic readouts correlated significantly.Functional degranulation was further confirmed by the observation that allergen stimulation caused allergen-specific release of PGD2.Incubation with 10 μg/mL anti-IgE resulted in a similar degree of degranulation.The hMC response was allergen specific, and there was no evidence of hMC activation or degranulation when we used sera from patients sensitized to allergens other than that used for stimulation."Stimulation with anti-IgE resulted in greater surface expression of CD63 and CD107a and higher levels of β-hexosaminidase and PGD2 release in hMCs compared with LAD2 cells.In summary, hMCs passively sensitized with sera from donors with peanut and/or pollen allergy were very sensitive to low doses of allergen.The sensitized hMCs demonstrated allergen-specific and dose-dependent degranulation by using both expression of surface activation markers and functional assays, indicating that hMCs are suitable as primary effector cells for screening studies.Given the correlation between immunologic parameters, we used CD63 expression as the readout in subsequent experiments.Primary hMCs were sensitized passively with sera from 14 patients with peanut allergy and 4 atopic control subjects without peanut allergy."All 14 patients with peanut allergy had a recent history of peanut-induced anaphylaxis.Incubation of passively sensitized hMCs with increasing concentrations of peanut extract resulted in a dose-dependent expression of CD63 in patients with peanut allergy but not in atopic control subjects.Anti-IgE induced a similar degree of CD63 expression.There was a significant correlation between the level of hMC degranulation induced at 0.1 ng/mL peanut extract and the peanut-specific IgE titer.The CD-sens of the MAT25 showed a weaker correlation with peanut-specific IgE levels, with the patient population appearing to separate into 2 groups.Stimulation of hMCs with the recombinant peanut proteins rAra h 1, rAra h 2, rAra h 3, and rAra h 6 also increased CD63 expression.The Bet v 1 homologue rAra h 8 did not induce substantial hMC degranulation in these patients.Fig 3, C, shows the correlation between IgE titers to allergen components and hMC degranulation."hMCs were sensitized by using sera from 21 patients with a confirmed systemic reaction to wasp venom and 7 patients with previous systemic reaction to honeybee but not wasp venom. 
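A minimal sketch of the dose-response summary measures described in the Methods above: the 4-parameter logistic fit, CD-sens computed as 100 divided by the half-maximal effective allergen concentration, and the area under the curve obtained by the trapezoidal rule on log-transformed concentrations. The data points below are invented for illustration and the code is not the authors' analysis script.

```python
# Sketch of the dose-response summaries described above: a 4-parameter
# logistic (4PL) fit, CD-sens (100 / EC50) and the AUC computed by the
# trapezoidal rule on log-transformed concentrations. Data are invented.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, log_ec50, hill):
    """4PL curve evaluated on log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - x) * hill))

conc_ng_ml = np.array([0.01, 0.1, 1, 10, 100, 1000])       # allergen, ng/mL
cd63_pct   = np.array([2.0, 8.0, 25.0, 48.0, 55.0, 57.0])  # % CD63+ hMCs

log_conc = np.log10(conc_ng_ml)
params, _ = curve_fit(four_pl, log_conc, cd63_pct,
                      p0=[0, 60, 0, 1], maxfev=10000)
ec50 = 10 ** params[2]                    # ng/mL
cd_sens = 100.0 / ec50                    # higher value = greater sensitivity
auc = trapezoid(cd63_pct, x=log_conc)     # area under the log-dose curve

print(f"EC50 = {ec50:.3g} ng/mL, CD-sens = {cd_sens:.3g}, AUC = {auc:.3g}")
```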
"Incubation of passively sensitized hMCs with increasing concentrations of wasp venom extract resulted in a dose-dependent expression of CD63 in patients with wasp venom allergy but not those with honeybee venom allergy.We performed MATs in a further cohort of 42 peanut-sensitized patients before they underwent DBPCFCs to peanut.Demographic and clinical characteristics of the study population are shown in Table I; 30 participants reacted to DBPCFCs and were classified as having peanut allergy, whereas 12 passed the challenge without experiencing dose-limiting symptoms and were categorized as sensitized but peanut tolerant."Individual MAT dose-response curves are shown in Fig E5, A, in this article's Online Repository at www.jacionline.org.By using ROC curve analysis, the MAT outcome measures with the best performance to discriminate patients with peanut allergy from peanut-tolerant patients were MAT response to crude peanut at 10 and 100 ng/mL concentrations and AUC for the dose-response curve.Therefore we used MAT-AUC as the outcome measure for further analyses.All 42 patients in the validation cohort underwent conventional allergy testing, as well component testing and BATs.We assessed the performance characteristics for each test by using DBPCFCs as the reference.The study team was blinded to the results of the diagnostic tests at the time of challenge to prevent bias.By using ROC curve analysis, MATs had the most favorable discrimination performance compared with the other diagnostic tests."We undertook a further analysis in a subgroup of 24 peanut-sensitized patients with equivocal conventional testing,31 12 of whom had a positive DBPCFC result.In this subgroup of patients, MATs continued to trend toward superior discrimination performance compared with other diagnostic tests.MAT responses in the validation cohort were assessed on at least 2 separate occasions in 25 patients with peanut allergy who also underwent BATs.We calculated ICC.We used ICC rather than coefficient of variation because the former is a more appropriate measure of interassay variation where there is no natural zero point.27,Overall, the ICC for MATs was 0.96.In contrast, the ICC for BATs performed on the same patients on 2 separate occasions up to 4 weeks apart was 0.43.We used data-driven analyses to identify whether there were subgroups of patients with similar MAT responses among the total cohort of 42 peanut-sensitized patients.28,Fig 5, A, shows the MAT dose response for each patient with individually fitted smooth response curves.A large variation between individual curves was observed, particularly at high allergen concentrations."The FDA28 indicated the presence of distinct groups characterized by different velocity and acceleration.We identified the dominant modes of variation of response patterns using FPC analysis.28, "The first FPC explained 92% of the variation, and the second FPC explained 7%.The first FPC represented the overall level of allergic response, with larger effects registered for increasing allergen concentration and with low, moderate, and high responses evident.The second FPC reflected response changes, which we interpreted as the sensitivity to the specific allergen concentration.In general, patients had a steadily increasing response across allergen concentrations.Variations revealed a group with high sensitivity to lower doses and a group requiring higher doses to induce a response.To further investigate the structure of the data, we performed k-means clustering.32, "Results 
indicated a well-defined partition of the response patterns, with 5-cluster solutions being optimal. "The clusters differed significantly from one another.Cluster 1 was characterized by no response or low response, and velocity and acceleration were stable throughout the entire range of concentrations; this group included peanut-tolerant patients."In contrast, patients in cluster 5 had a high response to low doses and quickly reached a response peak, which then became constant for the remaining concentrations. "Clusters 2 to 4 were characterized by distinct levels of sensitivity. "When we related the clusters to clinically defined severity of peanut allergy ascertained by DBPCFCs, cluster 1 corresponded to sensitized patients who were either nonreactive or experienced symptoms only at relatively high levels of exposure, although the higher clusters corresponded to patients who reacted to far lower levels of exposure with a tendency toward more significant systemic reactions. "To compare the discriminatory power of sIgE measurement in comparison with the MAT, we also undertook k-means clustering on sIgE data.The optimal number of clusters was 2, with the clustering distinguishing between patients with low versus those with high sIgE levels.When we compared the 2 partitions by plotting patients on the space defined by variable sIgE levels and the first FPC, the response for MAT clusters 4 and 5 appeared to be independent of sIgE levels.We developed a robust and reproducible MC-based assay to improve the diagnosis of IgE-mediated allergy using human MCs derived from human progenitor cells.hMCs sensitized with sera from patients with peanut, grass pollen, and Hymenoptera allergy demonstrated allergen-specific and dose-dependent degranulation by using both expression of surface activation markers and functional assays.The MAT is a very sensitive assay, with significant levels of surface expression of CD63 activation markers after stimulation with peanut at concentrations up to 2-log lower than that required for the BAT.15,We have shown that in the cohort of peanut-sensitized patients who underwent DBPCFCs to peanut, our novel MAT appeared to confer superior diagnostic accuracy compared with existing diagnostics in distinguishing between patients with clinical reactivity and those who did not react to DBPCFCs.Our data imply that the MAT response is not just dependent on serum specific IgE levels.When we compared the partitions obtained by using k-means clustering on sIgE data with that relating to MAT response, the latter appeared to be independent of sIgE.This is consistent with our observation of 2 separate groups of patients when comparing the MAT readout: one group had a higher MAT sensitivity to the same level of sIgE.Thus the MAT response does not appear to depend exclusively on the concentration of sIgE levels, suggesting that hMC degranulation can be regulated by additional elements, such as affinity or a combination of allergen IgE specificities that vary between subjects.In addition to diagnosis, other potential applications of the MAT could include investigations of the intracellular communication pathways and molecular mechanisms engaged in the IgE-mediated activation of these cells to allergen, the assessment of MAT responses to different allergen epitopes, and, given the very high sensitivity of MAT, assessment for the unintended presence of food allergens during food production, The MAT might therefore be useful as an aid to associated risk allergen management.We found hMC 
cultures to be stable, reproducible, and highly sensitive; these characteristics mean they are ideal tools to investigate the unique effector functions of human MCs.Our protocol used culture media free of serum, thus reducing the risk of nonspecific patient-protein interactions.Some groups have attempted to standardize diagnostic methods by using cell lines that express FcεRI, the high-affinity IgE receptor.33-35,We found that stimulation of primary hMCs with anti-IgE resulted in more degranulation than that seen with LAD2 cells under the same conditions, suggesting that hMCs might be more suitable than LAD2 cells in FcεRI-mediated degranulation studies.Indeed, LAD2 cells, being of tumor origin, are slow growing in culture36 and unstable in that they eventually lose their capacity to undergo FcεRI-mediated degranulation,37 a key characteristic of MCs.Therefore LAD2 cells might not be representative of a typical hMC phenotype.We also believe that hMCs are superior to rodent RBL-2H3 cells stably transfected with human FcεRI for diagnostic purposes.The latter is a humanized cell line, which in itself may be a shortcoming, and also shows variability in their IgE-binding capacity.38,A direct comparison between RBL-2H3 cells and primary human basophils showed no response from RBL-2H3 cells after sensitization with sera from a patient with chronic urticaria, despite primary basophils showing a strong response under the same conditions, further underlining the drawbacks of using this cell line for allergy testing.39,The stability, reproducibility, and higher sensitivity of hMCs recommend them as ideal tools to investigate the unique effector functions of human MCs and the intracellular molecular mechanisms and signaling pathways that distinguish human MCs from basophils in allergen reactivity.A number of groups have sought to assess and validate the BAT for the diagnosis of peanut allergy.15-17,There are similarities in methodology between BATs and MATs, with both techniques using flow cytometry to assess the expression of surface activation markers after incubation with allergen in vitro.However, the BAT requires fresh blood, which is ideally processed within 4 hours of collection.14,Some groups have sought to perform BATs up to 24 hours after sampling, which results in downregulation of surface activation marker expression40; to date, the effect of this downregulation on diagnostic accuracy has not been assessed.The requirement to analyze fresh blood samples affects the feasibility of the BAT and has limited its use to a few specialist centers.11,Moreover, basophils from 6% to 17% of the population do not respond to IgE under standard BAT conditions, and BATs cannot be used for these subjects.30,In contrast, the MAT uses serum samples, which can be frozen and batch tested in a central facility.Although the differentiation of hMCs from blood progenitors requires time and specific expertise, this could take place in specialist centers, with the possibility of supplying hMCs to external laboratories.Our preliminary data, showing improved diagnostic performance of the MAT compared with other techniques, are convincing arguments to pursue further development of the MAT for clinical testing.To ensure quality control across batches of hMCs, in each testing run we included an internal control and positive control.Each batch was generated from 3 to 9 pooled donors, which reduces the risk of specific donor dependence and increases reproducibility.We confirmed this by assessing the MAT response on at least 
2 separate occasions and assessing ICC, which was high.In contrast, the ICC for the BAT was much lower, an observation that likely represents the inherent biological variability in basophil reactivity from one day to another.One advantage of the BAT is that it evaluates both effector cell reactivity and serum factors.35,However, in the context of food allergy, it is unclear as to the relative contributions to circulating basophils versus tissue-resident MCs.18,The observation that the MAT has better discrimination performance than the BAT implies that clinical reactivity/tolerance might depend more on serum factors as opposed to basophil reactivity.The peanut-sensitized patients used for the initial validation of the MAT are not representative of a general clinic population with indeterminate diagnostics, who might otherwise be selected to undergo a formal food challenge to clarify a diagnosis.We attempted to correct for this by including a subanalysis including only patients with indeterminate standard diagnostic tests.15,The results in this group of patients indicated that the MAT can confer improved diagnostic accuracy over existing allergy tests.However, further evaluation in a more representative clinic cohort is needed to confirm these findings.Our exploratory data-driven analysis of MAT responses in patients with peanut allergy suggested that the patterns of response in the MAT can provide information relating to clinical reactivity, identifying the patients most at risk of significant anaphylaxis.If proved correct, this would be clinically very useful because no other diagnostic tests can predict the severity of the reaction on exposure.41,However, in this context our study generated a hypothesis that will require further studies to verify it.In conclusion, we developed an MC-based assay to improve the diagnosis of IgE-mediated allergy that was robust and reproducible.Compared with other commonly used diagnostic tests, our novel MAT appeared to confer superior diagnostic accuracy in distinguishing between patients with true peanut allergy and those who are sensitized but tolerant to peanut.We developed a robust and reproducible novel MC-based assay.Compared with existing diagnostic tests, our novel MAT appeared to confer superior diagnostic accuracy in distinguishing between peanut-sensitized patients with and without clinical reactivity.
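The data-driven clustering of MAT dose-response curves discussed above can be sketched as follows. Silhouette scoring is used here only as a stand-in for the NbClust criteria applied in R, and the input matrix of fitted curves is a placeholder.

```python
# Sketch of the data-driven clustering step described above: k-means on the
# fitted MAT dose-response curves, with the number of clusters chosen here by
# silhouette score (a stand-in for the NbClust criteria used in R).
# `curves` is a placeholder (n_patients x n_concentrations) response matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
curves = rng.random((42, 6))   # placeholder for fitted CD63 response curves

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(curves)
    score = silhouette_score(curves, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"chosen k = {best_k} (silhouette = {best_score:.2f})")
```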
Background: Food allergy is an increasing public health issue and the most common cause of life-threatening anaphylactic reactions. Conventional allergy tests assess for the presence of allergen-specific IgE, significantly overestimating the rate of true clinical allergy and resulting in overdiagnosis and adverse effects on health-related quality of life. Objective: We sought to undertake initial validation and assessment of a novel diagnostic tool, the mast cell activation test (MAT). Methods: Primary human blood-derived mast cells (MCs) were generated from peripheral blood precursors, sensitized with patients' sera, and then incubated with allergen. MC degranulation was assessed by means of flow cytometry and mediator release. We compared the diagnostic performance of MATs with that of existing diagnostic tools in a cohort of peanut-sensitized subjects undergoing double-blind, placebo-controlled challenge. Results: Human blood-derived MCs sensitized with sera from patients with peanut, grass pollen, and Hymenoptera (wasp venom) allergy demonstrated allergen-specific and dose-dependent degranulation, as determined based on both expression of surface activation markers (CD63 and CD107a) and functional assays (prostaglandin D2 and β-hexosaminidase release). In this cohort of peanut-sensitized subjects, the MAT was found to have superior discrimination performance compared with other testing modalities, including component-resolved diagnostics and basophil activation tests. Using functional principal component analysis, we identified 5 clusters or patterns of reactivity in the resulting dose-response curves, which at preliminary analysis corresponded to the reaction phenotypes seen at challenge. Conclusion: The MAT is a robust tool that can confer superior diagnostic performance compared with existing allergy diagnostics and might be useful to explore differences in effector cell function between basophils and MCs during allergic reactions.
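As an illustration of the ROC-based comparison of diagnostic readouts reported in the Results above, the sketch below computes ROC AUCs for two hypothetical readouts (a MAT summary measure and specific IgE) against the challenge outcome; all values are invented placeholders, not study data.

```python
# Illustrative ROC-based comparison of two candidate diagnostic readouts
# against the DBPCFC outcome, in the spirit of the Results above.
# All arrays are invented placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

challenge_positive = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 1])  # DBPCFC outcome
mat_auc_readout    = np.array([8.2, 7.9, 6.5, 1.2, 2.0, 7.1, 0.8, 9.0, 1.5, 6.8])
sige_ku_l          = np.array([15, 2.1, 30, 4.5, 0.9, 8.0, 12, 25, 0.4, 3.3])

for name, score in [("MAT-AUC", mat_auc_readout), ("peanut sIgE", sige_ku_l)]:
    print(name, "ROC AUC =", round(roc_auc_score(challenge_positive, score), 2))
```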
473
Genetic and pharmacological evidence that G2019S LRRK2 confers a hyperkinetic phenotype, resistant to motor decline associated with aging
Mutations in the leucine-rich repeat kinase 2 gene are associated with late-onset, autosomal dominant Parkinson's disease, and account for up to 13% of familial and 1–2% of sporadic PD cases. LRRK2-associated PD is clinically similar to idiopathic forms, and is characterized by the degeneration of substantia nigra dopaminergic neurons, usually with α-synuclein and ubiquitin positive Lewy body formation. Furthermore, variations in LRRK2 have been linked to other diseases, including leprosy, cancer and possibly inflammatory bowel disease, although the latter is controversial. LRRK2 is a large multifunctional protein, essentially consisting of a GTPase/ROC along with its COR domain, a kinase domain, and a number of protein-protein interaction domains including ankyrin and leucine-rich repeat motifs at the N-terminus, and WD40 repeats at the C-terminus. The pathogenic mutations of LRRK2 are clustered within the central tridomain region that forms the catalytic core of the protein. The substitution of a glycine with a serine at position 2019 is the most common familial mutation, and has attracted greater interest because it enhances LRRK2 kinase activity in vitro and in vivo, resulting in neuronal toxicity in vitro. Interestingly, non-selective LRRK2 inhibitors were shown to protect against G2019S LRRK2-induced neurodegeneration in vivo, indicating that inhibition of LRRK2 activity may represent a valuable target in a PD therapeutic perspective. Accordingly, these findings have provided the rationale for developing selective LRRK2 kinase inhibitors for their potential antiparkinsonian activity. Quite disappointingly, however, the attempts to reproduce parkinsonian-like motor deficits in rodents expressing G2019S LRRK2 have led to inconsistent results, and, as a consequence, a reliable rodent model for testing motor effects of LRRK2 inhibitors in vivo is currently unavailable. Indeed, mice overexpressing human or murine G2019S using bacterial artificial chromosome transgenesis did not show any impairment of motor performance, and instead were found to be hyperactive in some tests. Consistently, mice overexpressing human G2019S LRRK2 under the Thy1, CaMKII or CMV/PDGF artificial promoters showed, if anything, improvements in motor activity. Finally, rats temporarily overexpressing G2019S show increased exploratory behavior in the open field at 20 months but not at earlier ages. Although it is possible that the degree of G2019S transgene overexpression in midbrain dopamine neurons, which is promoter-dependent, drives the motor phenotype, the data so far accumulated in rodents overexpressing G2019S LRRK2 suggest, at most, that low expression levels of G2019S are not detrimental for motor function. Actually, the consistent observations of test-dependent, mild improvements of motor activity across these studies call for a more in-depth analysis of the impact of G2019S LRRK2 on motor function, using a longitudinal phenotyping strategy and behavioral tests more specific for motor function. In fact, most studies are limited to the use of the open field test, where motor performance can be influenced by affective states. In addition, studies in G2019S overexpressing animals may be criticized for artificially enhancing LRRK2 levels in areas where LRRK2 physiological expression is low, and for overlooking the interference between LRRK2 mutants and native endogenous LRRK2, which is still expressed. For these reasons, in the present study we enrolled two cohorts of G2019S knock-in mice and wild-type littermates, and analyzed their motor activity from the
age of 3 to 19 months, using a set of complementary behavioral tests, specific for akinesia, bradykinesia and overall gait ability.Our study revealed that G2019S KI mice had enhanced motor activity compared to WT already at 3 months of age, and throughout aging.To confirm that enhanced kinase activity accounts for this phenotype, we performed a parallel longitudinal study in mice carrying a LRRK2 mutation that impairs kinase activity, in comparison with their own WT.In addition, we tested the ability of small molecular-weight ATP analogous LRRK2 kinase inhibitors to reverse the hyperkinetic phenotype of G2019S KI mice.In vivo LRRK2 targeting of kinase inhibitors was confirmed by measuring LRRK2 phosphorylation at Ser935.Previous studies have attempted to replicate a parkinsonian-like phenotype in rodents by overexpressing pathogenic G2019S LRRK2 mutation.These studies differ in many ways, such as the technology used, the levels of transgene expression and its neuronal localization, the mouse strain and, not last, the motor tests used.Nonetheless, these studies failed in showing a detrimental effect of G2019S on motor functions, unless high levels of transgene expression are artificially attained in substantia nigra neurons via the CMV/PDGF promoter, leading to 30–50% neuronal loss.Consistent with this view, no motor change was observed in another study on these mice, where a lower level of transgene expression in DA neurons and, consequently, a lower degree of substantia nigra neurodegeneration was achieved.To extend previous studies, here we provide the results of the first longitudinal phenotyping study in G2019S KI mice, showing that expression of mouse LRRK2 gene carrying the G2019S mutation confers a hyperkinetic phenotype, that is resistant to age-related motor decline.The robust and long lasting hyperkinetic phenotype described was substantially confirmed by a transversal motor analysis of different age-matched cohorts of G2019S KI and WT mice.Two lines of evidence seem to confirm that enhancement of kinase activity, which is a consequence of the G2019S mutation, increases motor performance: i) the motor function of mice carrying a kinase-silencing mutation showed normal age-related motor worsening superimposable to that of WT littermates and ii) ATP-competitive kinase inhibitors reversed the hyperkinetic phenotype selectively in G2019S KI mice.Further evidence that enhancement of LRRK2 kinase activity might be responsible for the observed motor phenotype in G2019S KI mice comes from KI mice carrying the R1441C mutation in the ROC domain.In fact, the R1441C mutation induces a milder increase of kinase activity with respect to the G2019S mutation, or no increase at all, and motor activity of R1441C KI mice in the open field and rotarod appears to be unchanged up to 24 months of age.The motor phenotype described in the present study clearly differs from that reported in G2019S overexpressing mice, although transient and test-related motor facilitation has been observed in those mice.For instance, BAC mice overexpressing human G2019S LRRK2 showed faster walking speed in the open field whereas human G2019S overexpressors under the Thy1 or the TetO CaMKII promoters showed transient improvement in rotarod performance or increased exploratory behavior.In addition, rats temporarily, but not constitutively, overexpressing human G2019S LRRK2 showed increased exploratory behavior in the open field at 18 months.The influence of G2019S LRRK2 on motor function was evident in tests specific for 
akinesia/bradykinesia, and spontaneous exploratory behavior, but not in a test for exercise-driven motor activity.This might suggest an influence of G2019S LRRK2 on specific motor parameters.Indeed, stepping activity involves striatal sensory–motor function whereas the rotarod test integrates both motor and non-motor functions, and likely involves multiple brain areas, where LRRK2 is expressed to a different degree.However, both in the bar and drag test, the main effect of G2019S was to preserve motor function from aging, suggesting that, more in general, G2019S mutation might confer a phenotype which is more resistant to the age-related motor decline.The possibility that this hyperkinetic phenotype is associated with changes of neurotransmitter release should be considered.Indeed, both LRRK2 silencing or expression of the G2019S mutation has been reported to facilitate exocytosis, consistent with the finding that too much or too little kinase activity has a negative impact on vesicle trafficking.Consistently, pharmacological inhibition of LRRK2 activity impairs vesicle endocytosis and neurotransmitter release.Moreover, enhancement of the stimulus-induced DA release has been detected in PC12 cells expressing the G2019S mutation.Therefore, we might hypothesize that the hyperkinetic phenotype of G2019S KI mice is due to increased DA concentration at the synaptic cleft.Alternatively, an increased motor activity could result, even in the absence of an elevation of DA levels, from an increased expression of postsynaptic D1 receptors, as suggested by study in G2019S expressing cells.The motor phenotype of G2019S KI mice seems to be achieved through a gain-of-function process.Indeed, silencing kinase activity does not affect motor function.The lack of endogenous control over motor activity by endogenous LRRK2 is further supported by the absence of a clear motor phenotype in LRRK2 knockout mice, although exploratory changes consistent with anxiety-like behavior, and facilitation of rotarod performance have also been reported in these mice.While our results disclose robust motor alterations in G2019S KI mice, KI mice expressing other pathological LRRK2 mutations as well as G2019S KI mice from other laboratories need to be assessed in these motor tests to strengthen the involvement of LRRK2 kinase activity in this paradigm.Interestingly, mice carrying PD-linked mutations, such as α-synuclein overexpressed under the Thy1 promoter, or parkin and DJ-1 double knock-out mice were found to be hyperactive and have increased striatal DA levels in their pre-symptomatic phase, possibly indicating compensatory mechanisms preceding nigro-striatal DA system demise.Whether the hyperactive motor phenotype of G2019S KI mice might be considered as a result of pre-symptomatic compensatory changes is presently under investigation.To possibly support this view, asymptomatic human G2019S carriers have higher putaminal DA turnover rate.The fact that we did not observe a reversal of motor hyperactivity into frank hypokinesia, as in α-synuclein overexpressors, up to 19 months, might indicate a longer and slower pre-symptomatic phase in G2019S KI mice, in line with the different ages at onset of the disease, i.e. 
juvenile for α-synuclein related PD, and late for G2019S related PD.It will be interesting to investigate in the future the molecular basis underlying LRRK2 G2019S-related hyperactive motor performance.Indeed, the requirement of kinase activity in LRRK2 pathogenic effects is still unclear.For instance, there is also growing evidence that the LRRK2 levels are driving the neuropathology rather than the kinase activity.Nevertheless, the role of LRRK2 on motor function and on neurotoxicity can be two independent mechanisms.Another important finding of the present study is the demonstration that ATP-competitive LRRK2 kinase inhibitors reversed the motor phenotype in G2019S KI mice.Both H-1152 and Nov-LRRK2-11 were able to inhibit LRRK2 kinase activity and reduce LRRK2 phosphorylation at Ser935 in NIH3T3 cells.This confirms previous findings that LRRK2 kinase inhibition with H-1152 abolishes binding to 14-3-3 proteins, resulting in de-phosphorylation of LRRK2 at Ser910 and Ser935.Based on these data, phosphorylation at Ser935 has been proposed as a readout of LRRK2 kinase activity although this phosphorylation is not the direct consequence of autophosphorylation but is possibly controlled by a LRRK2-activated kinase/phosphatase and, therefore, does not always correlate with kinase activity.In our hands, Nov-LRRK2-11 reversed the phenotype of G2019S KI mice and inhibited LRRK2 phosphorylation at 10 mg/kg, confirming ex vivo pulldown experiments showing that Nov-LRRK2-11 penetrates into the brain.Behavioral data were in agreement with PK data after 3 mg/kg oral dose, and with the apparent terminal half-life of Nov-LRRK2-11 in blood after i.v. dosing of 1 mg/kg.Interestingly, the same behavioral effect was observed at ten-fold lower doses of H-1152, suggesting significant brain penetration also for H-1152, for which, however, no published pharmacokinetic are available.Further validation of this motor reversal phenotype with additional compounds inhibiting LRRK2 is of course desirable and necessary.The present study shows a clear dissociation between motor changes and in vivo LRRK2 phosphorylation.Indeed, Nov-LRRK2-11 inhibited motor activity in G2019S KI mice causing only minimal effects in WT mice, in face of a marked inhibition of LRRK2 phosphorylation in the striatum and cerebral cortex of both genotypes.Moreover, although H-1152 consistently inhibited motor activity and LRRK2 phosphorylation in G2019S KI but not WT mice, its motor effects far exceeded those on LRRK2 phosphorylation.The most parsimonious way to explain this discrepancy is that LRRK2 de-phosphorylation at Ser935 is a marker for in vivo target engagement of LRRK2 kinase inhibitors, but does not follow their motor effects.Perhaps, other phosphorylation, or autophosphorylation, site on LRRK2 should be monitored.In fact, we should recall that both inhibitors target other kinases beyond LRRK2, which may act upstream and downstream of LRRK2.For instance, PKA has been shown to crosstalk with LRRK2, whereas ROCK/MLCK are involved in actin cytoskeleton remodeling pathways, in which activity they could crosstalk with LRRK2.On the other hand, given the parallel inhibition of motor activity and striatal LRRK2 phosphorylation induced by H-1152, we might speculate that striatal LRRK2 is a key regulator of motor activity.In this case, if higher LRRK2 kinase activity is present in the striatum of G2019S KI mice compared to WT mice, as predicted, normalization of these levels by LRRK2 kinase inhibitors might represent the trigger of a cascade of 
events leading to sustained motor inhibition.Moreover, the different patterns of LRRK2 de-phosphorylation of Nov-LRRK2-11 and H-1152 in G2019S KI and WT mice suggest a higher LRRK2 selectivity and/or brain exposure/free fraction of Nov-LRRK2-11.For further confirmation, additional information on earlier times-points of Nov-LRRK2-11 effects as well as higher doses of H-1152 would be desirable.In addition, the finding that Nov-LRRK2-11 reduces LRRK2 levels in the cortex but not striatum indicates different properties of the LRRK2 system in these two areas.Male homozygous LRRK2 G2019S KI and KD mice backcrossed on a C57Bl/6J background were obtained from Novartis Institutes for BioMedical Research, Novartis Pharma AG.Male non-transgenic wild-type mice were littermates obtained from the respective heterozygous breeding.The mice used in our study were generated at Novartis laboratories, and were previously characterized from several biochemical and neuropathological standpoints, although motor analysis was limited to the open field in 5-month old animals.Mice employed in the study were kept under regular lighting conditions and given food and water ad libitum.Experimental procedures involving the use of animals were approved by the Ethical Committee of the University of Ferrara and the Italian Ministry of Health.Adequate measures were taken to minimize animal pain and discomfort.The longitudinal study was conducted on two cohorts of G2019S KI and their WT littermates.Mice were received from Novartis at the age of about 2 months, and accommodated in the vivarium of the University of Ferrara.Mice were subjected to motor tests at 3, 6, 10, 15 and 19 months.D1994S KD and their WT mice were tested at 3, 6, 10 and 15 months.Separate, age-matched cohorts of 3, 10, 14 and 18-month-old WT and G2019S KI mice were enrolled in transversal behavioral studies, to parallel the results of the longitudinal study.Finally, other cohorts of mice were used for pharmacological studies with LRRK2 kinase inhibitors at 6, 12 and 15 months, or 12 months.Motor activity was evaluated by means of three behavioral tests specific for different motor abilities, as previously described: the bar, drag and rotarod test.Animals were trained for 4 days to the specific motor tasks in order to obtain a reproducible motor response, and then tested at the 5th day, both in the phenotyping and pharmacological studies.For pharmacological studies, the test sequence was repeated before and at different time-points after drug injection.Experimenters were blinded to genotype and treatments.Originally developed to quantify morphine-induced catalepsy, this test measures the ability of the animal to respond to an externally imposed static posture.Also known as the catalepsy test), it can also be used to quantify akinesia also under conditions that are not characterized by increased muscle tone as in the cataleptic/catatonic state.Mice were gently placed on a table and forepaws were placed alternatively on blocks of increasing heights.The time that each paw spent on the block was recorded.Performance was expressed as total time spent on the different blocks.Modification of the ‘wheelbarrow test’, this test measures the ability of the animal to balance its body posture with the forelimbs in response to an externally imposed dynamic stimulus.It gives information regarding the time to initiate and execute a movement.Animals were gently lifted from the tail leaving the forepaws on the table, and then dragged backwards at a constant speed for a fixed 
distance.The number of steps made by each paw was recorded.Five to seven determinations were collected for each animal.The fixed-speed rotarod test measures different motor parameters such as motor coordination, gait ability, balance, muscle tone and motivation to run.Mice were tested over a wide range of increasing speeds on a rotating rod and the total time spent on the rod was recorded.The open field test was used to measure spontaneous locomotor activity in 15-month-old mice.The ANY-maze video tracking system was used as previously described.Briefly, mice were placed in a square plastic cage, one mouse per cage, and ambulatory behavior was monitored for 60 min with a camera.Four mice were monitored simultaneously each experiment.Total distance traveled and immobility time were recorded.Twelve-month-old mice were administrated i.p. with the LRRK2 kinase inhibitor H-1152 at two different dose levels, or with the LRRK2 kinase inhibitor Nov-LRRK2-11, at two different dose levels for the indicated time.H-1152 was dissolved in 0.9% saline solution whereas Nov-LRRK2-11 in 3% DMSO/3% Tween 80."NIH3T3 cells were cultured in Dulbecco's Modified Eagle's medium supplemented with 10% fetal bovine serum, penicillin and streptomycin and maintained at 37 °C in a 5% CO2 controlled atmosphere.H-1152 and Nov-LRRK2-11 were dissolved in 0.9% saline solution and in 3% DMSO/3% Tween 80/0.9% saline, respectively.Inhibitors were used at the indicated concentrations, and equivalent volumes of saline solution were used as control.Inhibitors were added to the culture medium for 90 min before cell lysis.NIH3T3 cells, as well as striatum and cortex obtained from brain dissection were homogenized and solubilized in lysis buffer supplemented with 1% Triton X-100 and protease inhibitor cocktail, then cleared at 14,000 g at 4 °C for 30 min."Protein concentrations were determined using the bicinchoninic acid assay as manufacturer's instructions.Proteins were separated by electrophoresis into pre-casted 4–20% SDS-PAGE gels and subsequently transferred onto Immobilon-P membrane.Membranes were first incubated 1 h at RT with rabbit anti-LRRK2 phospho Ser935, rabbit anti-LRRK2 UDD3 and mouse anti-GADPH, then with HRP-conjugated secondary antibodies for 1 h at room temperature and then incubated with enhanced chemiluminescent western blot substrate.In order to gain insights on the brain penetration of Nov-LRRK2-11, a screening cassette approach was used, as previously described.Adult male C57Bl/6 mice were orally administered with Nov-LRRK2-11 at a dose of 3 mg/kg p.o.Volume of oral administration was 10 mL/kg body weight.After drug cassette administration, blood was collected at different time points either by puncture of the sublingual vein or by puncture from the vena cava at sacrifice.Moreover, at sacrifice, brains were removed, weighted and immediately frozen on dry ice.Blood and brain samples were stored at − 20 °C until analysis.Samples were analyzed for Nov-LRRK2-11 content with LC–MS/MS methodologies.Data are expressed as absolute values and are mean ± SEM of n mice.To assess the significance of behavioral changes over the 19-month longitudinal study a linear mixed-model repeated measures analysis using the REPEATED statement was used, followed by the Bonferroni test.Genotype was set as discrete variable, weight as continuous variable.This allowed to verify whether changes in weight could account for changes in behavioral performances.Statistical analysis of drug effect was performed by one-way repeated measure 
analysis of variance followed by the Newman–Keuls test for multiple comparisons, or by two-way ANOVA followed by Bonferroni test for multiple comparisons."Instead, two groups of data were compared with Student's t-test, two tailed for unpaired data.p-values < 0.05 were considered to be statistically significant.To investigate whether the kinase-enhancing G2019S point-mutation in murine LRRK2 affects motor performance, two cohorts of G2019S KI mice and age-matched WT littermates were enrolled in a longitudinal study in which motor activity was measured using the bar, drag and rotarod tests from 3 through 19 months of age.G2019S KI mice had throughout the study a lower body weight than WT.The difference was 14% on average.In the bar test, a significant effect of genotype, time and their interaction was found.The influence of weight was found not to be significant.Immobility time of G2019S KI mice in the bar test was not different from that of WT at 3 months.Immobility time increased along with aging in WT mice reaching a maximum of 33.7 ± 2.3 s at 19 months.Conversely, G2019S KI mice did not become akinetic with aging, showing similar performances across the study.The difference between genotypes was evident starting at 6 months, and attained stable values from 10 months onward.In the drag test, a significant effect of genotype, time and their interaction was found.As in the bar test, no significant influence of weight was observed.G2019S KI mice showed a significant 23% greater stepping activity than WT at 3 months.This difference became 2–3-fold larger in older animals, since stepping activity of WT mice progressively worsened over time, reaching 3.4 ± 0.3 steps at 19 months, whereas that of G2019S KI mice remained stable up to 15 months, showing a significant decline only at 19 months.Different from the bar and drag test, no significant effect of genotype was found in the rotarod test, but a significant effect of time and genotype × time interaction.Also in this test, the influence of weight was not significant.Mild improvement of rotarod performance in WT mice was observed along with aging, whereas that of G2019S KI mice remained stable throughout the study.The open field test was performed in 15-month-old animals.G2019S KI mice showed 42% shorter immobility time and 43% longer distance traveled compared to WT.To confirm the hyperkinetic phenotype of G2019S KI mice, the bar, drag, rotarod and open field tests were repeated in age-matched separate cohorts of 3, 10, 14 and 18-month-old mice, not involved in the longitudinal study.These experiments substantially confirmed that G2019S KI mice were hyperactive, with significant differences with WT emerging already at 3 months in the bar and drag tests, and at 10 months in the open field.As in the longitudinal study, no differences in rotarod performance were observed between age-matched cohorts of WT and G2019S KI mice.Since experiments in G2019S KI mice suggested that enhancement of kinase activity is associated with greater motor performance, we investigated if kinase activity silencing mutation might cause a differential effect.To this purpose, we used mice bearing the kinase-inactivating point mutation D1994S and age-matched WT littermates.No difference in weight was observed between D1994S KD and WT mice throughout the study.Statistical analysis of bar test values revealed no significant effect of genotype, a significant effect of time but not a genotype × time interaction.Likewise, in the drag test, a significant effect of time, but not 
of genotype or genotype × time interaction was found.Only in the rotarod test, a significant effect of genotype and time but not their interaction was found.Overall, basal activity in the bar, drag and rotarod test was similar between D1994S KD mice and their WT at any age analyzed.As expected, WT but as well D1994S KD mice showed a significant worsening of motor activity in the bar and drag tests at 10 and 15 months.Transient improvement of rotarod performance was observed in WT and D1994S KD mice at 10 months.Consistently, no difference in exploratory behavior was observed between genotypes in the open field at 15 months Figs. 2D–E).WT mice obtained from both colonies showed substantially similar performances throughout the study, with the exception of rotarod performance which was significantly lower at 3 and 6 months in WT littermates of G2019S KI mice.Since results obtained with G2019S KI and D1994S KD mice suggest that the greater motor performance associated with the G2019S mutation is dependent on kinase activity, we next asked whether LRRK2 kinase inhibitors acutely administered to G2019S KI mice were effective at returning the hyperkinetic phenotypes to WT levels.We first used H-1152, a ROCK inhibitor which has been previously shown to display high potency against LRRK2.We initially confirmed that H-1152 was effective at inhibiting endogenous LRRK2 in NIH3T3 mouse fibroblasts, using de-phosphorylation of Ser935 as readout of LRRK2 kinase activity, as previously described.As shown in Fig. 3, H-1152 induced Ser935 de-phosphorylation in a concentration-dependent manner, with apparent IC50 of 170 nM.We next assessed H-1152 in vivo.In 6-month old mice, H-1152 was ineffective at 0.1 mg/kg, but increased immobility time and reduced stepping activity of G2019S KI mice to the levels of WT mice at 1 mg/kg.The same dose of H-1152 did not affect rotarod performance.The time-course of the response to H-1152 was next studied in 12-month old mice.Saline-treated WT and G2019S KI mice showed stable responses in the bar and drag tests across the 24-h observation period.Administration of H-1152 induced a rapid and prolonged increase of immobility time and reduction of stepping activity in G2019S KI mice, being ineffective in WT mice.No residual effect of H-1152 was detected 24 h after administration.Rotarod performance was not significantly affected by H-1152.Indeed, performances worsened within 75 min after administration and remained stable afterwards in mice of both genotypes treated with saline or H-1152.Motor tests were repeated in 15-month old mice with substantially similar results, although H-1152 also mildly and transiently impaired rotarod performance in G2019S KI mice.To account for a certain compound specificity of LRRK2 activity inhibition we also treated D1994S KD mice and their wild-type controls.H-1152 treatment did not affect motor activity in any genotypes, consistent with the lack of motor abnormalities in these animals.Finally, we evaluated in vivo on-target engagement of H-1152 by measuring LRRK2 phosphorylation at Ser935 in ex vivo samples of the striatum and cerebral cortex obtained from 12-month old G2019S KI and WT mice.A decrease of LRRK2 phosphorylation was observed in striatum, but not cerebral cortex, at 20 min after administration of 1 mg/kg H-1152 but not later time-points.Contrary to G2019S KI mice, no effect of H-1152 on LRRK2 phosphorylation was detected in striatum or cerebral cortex at 20 min after administration in WT mice,To confirm that the results obtained 
with H-1152 are due to LRRK2 inhibition and not to other off-target kinases, we employed a second small-molecule ATP-analog inhibitor, Nov-LRRK2-11, which has recently been shown to be brain penetrant and reasonably selective. We first tested Nov-LRRK2-11 in vitro for its ability to inhibit LRRK2 Ser935 phosphorylation. Nov-LRRK2-11 proved very potent at reducing LRRK2 phosphorylation in NIH3T3 cells, with an IC50 of 0.38 nM. Next, we assessed the compound in vivo. Nov-LRRK2-11 did not induce any obvious behavioral change in immobility time or step number in WT mice. However, acute Nov-LRRK2-11 treatment phenocopied the motor-inhibiting effects of H-1152 in G2019S KI mice. Nov-LRRK2-11 was ineffective at 1 mg/kg, and induced a rapid increase in immobility time and reduction of stepping activity at 10 mg/kg. These effects were shorter lasting than those of H-1152, since stepping activity was normalized and immobility time only mildly elevated at 360 min after administration. Nov-LRRK2-11 caused a delayed reduction of rotarod performance in WT mice at 1 and 10 mg/kg, the latter dose inducing a more rapid effect. In G2019S KI mice, the lower dose induced a response that was superimposable on that observed in WT, albeit more rapid in onset. Behavioral data were in line with pharmacokinetic data. In fact, following an oral dose of 3 mg/kg Nov-LRRK2-11, brain and blood concentrations were maximal at 1 h and only minimally detectable at 4 h; at 24 h, compound levels were below detection. To confirm in vivo LRRK2 targeting, we measured LRRK2 phosphorylation at Ser935 30 min after Nov-LRRK2-11 administration in 12-month-old G2019S KI and WT animals. Nov-LRRK2-11 markedly reduced LRRK2 phosphorylation in the striatum and cerebral cortex of G2019S KI mice as well as WT mice. Interestingly, in the cerebral cortex but not the striatum, pharmacological blockade of LRRK2 kinase activity reduced LRRK2 protein levels. Finally, pSer935 LRRK2 and endogenous LRRK2 protein levels were monitored in 12-month-old G2019S KI and KD mice in comparison with their WT controls. pSer935 LRRK2 levels in striatum and cortex, as well as LRRK2 protein levels in striatum, were similar across genotypes. Likewise, similar levels of LRRK2 were found in the cortex of G2019S KI and WT mice. D1994S KD levels were also in the same range as those of G2019S KI mice, although lower than those found in their littermates. The present longitudinal phenotypic study provides genetic evidence that expression of the G2019S mutation under the endogenous promoter confers better motor performance on mice and protects them from age-related motor decline in tests specific for akinesia/bradykinesia. Enhancement of LRRK2 kinase activity likely underlies this phenotype, since D1994S KD mice do not display motor abnormalities and ATP-competitive LRRK2 inhibitors reversed the motor phenotype in G2019S KI mice. This study challenges the idea that G2019S is detrimental for motor activity in rodents, suggesting that other factors, such as alpha-synuclein or parkin, might be involved in inducing a PD-like phenotype. The possibility that the hyperkinetic phenotype of G2019S KI mice might reflect a pre-symptomatic stage of PD also needs to be explored. Finally, and not less important, this study also describes for the first time a correlation between the in vivo motor effects of LRRK2 inhibitors and their ability to de-phosphorylate LRRK2 at Ser935, suggesting that the G2019S KI mice may represent a valuable in vivo model to screen for LRRK2 inhibitors.
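A sketch of a longitudinal analysis in the spirit of the repeated-measures mixed model described in the statistics section above, with genotype as a discrete factor, body weight as a continuous covariate, and a random intercept per animal. statsmodels is used here instead of the SAS REPEATED statement, and all data and column names below are synthetic assumptions, not study data.

```python
# Sketch of a repeated-measures mixed model: genotype as a discrete factor,
# body weight as a continuous covariate, random intercept per mouse.
# Implemented with statsmodels rather than SAS; all data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i in range(20):
    genotype = "G2019S_KI" if i < 10 else "WT"
    for age in [3, 6, 10, 15, 19]:
        immobility = (5 + (1.5 * age if genotype == "WT" else 0.2 * age)
                      + rng.normal(0, 2))
        rows.append({"mouse_id": f"m{i}", "genotype": genotype,
                     "age_months": age,
                     "weight_g": 25 + 0.3 * age + rng.normal(0, 1),
                     "immobility_s": immobility})
df = pd.DataFrame(rows)

model = smf.mixedlm(
    "immobility_s ~ C(genotype) * age_months + weight_g",
    data=df,
    groups=df["mouse_id"],
).fit()
print(model.summary())  # genotype, age and genotype x age effects
```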
The leucine-rich repeat kinase 2 mutation G2019S in the kinase-domain is the most common genetic cause of Parkinson's disease. To investigate the impact of the G2019S mutation on motor activity in vivo, a longitudinal phenotyping approach was developed in knock-in (KI) mice bearing this kinase-enhancing mutation. Two cohorts of G2019S KI mice and wild-type littermates (WT) were subjected to behavioral tests, specific for akinesia, bradykinesia and overall gait ability, at different ages (3, 6, 10, 15 and 19 months). The motor performance of G2019S KI mice remained stable up to the age of 19 months and did not show the typical age-related decline in immobility time and stepping activity of WT. Several lines of evidence suggest that enhanced LRRK2 kinase activity is the main contributor to the observed hyperkinetic phenotype of G2019S KI mice: i) KI mice carrying a LRRK2 kinase-dead mutation (D1994S KD) showed a similar progressive motor decline as WT; ii) two LRRK2 kinase inhibitors, H-1152 and Nov-LRRK2-11, acutely reversed the hyperkinetic phenotype of G2019S KI mice, while being ineffective in WT or D1994S KD animals. LRRK2 target engagement in vivo was further substantiated by reduction of LRRK2 phosphorylation at Ser935 in the striatum and cortex at efficacious doses of Nov-LRRK2-11, and in the striatum at efficacious doses of H-1152. In summary, expression of the G2019S mutation in the mouse LRRK2 gene confers a hyperkinetic phenotype that is resistant to age-related motor decline, likely via enhancement of LRRK2 kinase activity. This study provides an in vivo model to investigate the effects of LRRK2 inhibitors on motor function.
474
Evaluation method for process intensification alternatives
Decisions are regularly taken in scientific, industrial or commercial activities in order to find optimal conditions, re-design equipment to improve overall plant performance, purchase new technology, and in other situations.Maximising profit is usually the reason for doing any of the above listed tasks.When a new technology or product is considered for the substitution of an existing one, it is necessary to compare both with respect to specific aspects.Perhaps the biggest difficulty found in most cases is the integration of various technical, economic and environmental indicators, as well as quantitative and qualitative information.Most existing methods found in the literature span a wide range of complexity and transparency, which strongly determines whether their practical implementation is feasible, or whether they are adopted with less resistance by the specific industrial sector or scientific community.Process Intensification (PI) concepts have gained attention in disparate chemical engineering activities.Its goals are related to new, sustainable and efficient ways of manufacturing chemical products .In short, innovative principles in both process and equipment design are introduced as long as they can lead to significant improvements in process efficiency and product quality, and to reduced waste streams.Naturally, the decision of “intensifying” a process, which means changing something in the existing plant or technology, demands a deep analysis and rigorous decision process .PI strategies can vary depending on the field of chemical engineering involved besides PI itself, such as Process System Engineering, where different approaches have been identified: Structure, Energy, Synergy and Time .In the same paper, the following principles have been postulated: maximizing the effectiveness of intra- and intermolecular events; giving each molecule the same processing experience; optimizing the driving forces and maximizing the specific areas to which these forces apply; maximizing synergistic effects from partial processes.These principles and approaches can be applied at different scales, from molecular processes, through microfluidics, to the macroscale, and up to the megascale .Process integration strategies can be useful for intensifying processes in a broader sense related to PSE, e.g. modelling, optimisation, control, etc.
In the cited paper, a division into two categories has been made: unit and plant intensification.A mathematical formulation for each intensification process was proposed, considering the intensification of existing units as well as the installation of new ones.The applicability of this model was presented in the same paper as a very elaborate case study that we also employ later in some of the examples given.The challenge of designing sustainable processes with scarce information, and in a format that can be understood by both chemists and engineers, has been previously identified .Inspired by green chemistry principles , techno-economic analysis and environmental life-cycle assessment, a methodological tool was proposed for early-stage multi-criteria assessment and used in the evaluation of key process development decisions for the novel production of renewable fuels and bulk chemicals .Existing in-depth analyses tend to be based on data that are difficult to collect and consume significant amounts of time, particularly when referring to downstream processing, which is normally unknown during early design phases at the laboratory or scaling-up stage .There are professional software packages and qualitative assessment techniques such as Aspen Icarus Process Evaluator, E-factor, GME, EcoScale, ProSuite, BASF eco-efficiency and the Sustainability Consortium Open IO that help in such calculations.Most of these methods are information intensive, and require time and resources for their collection .The method we introduce in this work was initially designed to make comparisons particularly in academic settings, and was later expanded for real-life scenarios.In a first approximation, there is no need to include cost considerations, yet, as demonstrated in several cases in this paper, they can be easily added to assist in a decision-making process where reliable and time-efficient assessment at different stages of a project is of relevance.This method is a simple evaluation tool that could provide a relatively fast assessment in the form of a “number” to allow discussion in a team of experts, or to convince “outsiders” of the benefits or drawbacks of a newly proposed change.This method is not intended to be used for optimisation in its current form, which would require proper validation and is out of the scope of the present study.Such validation would be possible if relevant and sufficient data from existing plants were made available, and a proper long-term study could be carried out to evaluate whether the implementation of the intensified solution was indeed better.We look forward to research or innovation teams that would like to join efforts in this respect in the future.Economic constraints are the main hurdles for the adoption of any new project.In practice, there are difficulties in quantifying the “improvement” of independent factors not necessarily interrelated or connected to cost.This is also the case when trying to combine “qualitative” aspects such as safety and overall impression, e.g.
better-worse.An index defined as the ratio of the total costs of raw materials used in the process with respect to the value of all the marketable products and co-products at the process end has been identified as the simplest, yet incomplete, approach for assessing the economic viability of chemical processes .This index is one component of a screening method based on a multi-criteria approach allowing quantitative and qualitative proxy indicators for the description of economic, environmental, health and safety, as well as operational aspects tailored for an integrated biorefinery concept.The authors have defined the following indexes: EC, Economic constraint; EI, Environmental impact of raw materials; PCEI, Process costs and environmental impacts; EHSI, Environmental-Health-Safety index; RA, Risk aspects.These categories could be evaluated as part of an early-stage sustainability assessment as favourable or unfavourable with respect to their petrochemical counterpart.Other authors have proposed a complementary view of PI based on the concepts of local and global intensification .Local PI stands for the classical approach based on using techniques and methods that drastically improve the efficiency of a single unit or device.The drivers of local PI are primarily technical, although there are other “drivers” such as efficiency, cost, ecological impact, productivity or yield.Their proposed global method focuses on the calculation of the efficiencies for different extensity values of units or steps.Similarly, a multi-objective decision framework relying on data available at early design stages was introduced before .It includes reaction mass balances, raw material and product prices, life-cycle environmental impacts such as the cumulative energy demand and greenhouse gas emissions of the feedstocks, physicochemical properties of reactants and products, as well as existing hazards .This method was adjusted for the production of bio-based chemicals, after including pretreatment of biomass, distribution of environmental burdens by product allocation, number of co-products, risk aspects and comparison of processes with their petrochemical equivalents.It has five sustainability indicators: economic constraint, environmental impact of raw materials, process costs and environmental impact, Environmental-Health-Safety index and risk aspects; these are lumped into a score index with weighting factors that provides a comparison between all process alternatives.For the calculation of this index, scores are normalized by the worst score of the two processes under comparison.Weights based on the opinion of experts are assigned taking into account economic feasibility on a commercial scale; long-term sustainability together with environmental impacts as low as possible; short-term or immediate hazards; and risk aspects for decision makers.Summing the normalized values for all indicators yields a single index or total score for both processes under analysis, which are later compared by their ratio.For values <1 the new process provides benefits over the traditional one.Other methodologies for local or global intensification have been reported, demonstrating the richness and complexity of this topic .As discussed before, the integration of technical and non-technical types of factors is a difficulty many engineers and scientists have faced.This might be the reason why many have prematurely abandoned PI solutions.In this paper, we present a method to calculate a single “number or value” which shares elements of existing indexes
or methodologies, but is simpler than those we found in the literature.As will be seen in the several examples given in the following sections, we apply the proposed method and illustrate how it can be used by experts.In particular, we see great relevance as a tool in the process intensification discipline.The method has also been tested for two consecutive years as part of the Process Intensification Principles course that one of the authors teaches at the University of Twente.Taking the students as “outsiders”, the explanation of this method and its application in academic settings have shown certain advantages.The most important is that they come to realise how difficult it is to take decisions when faced with choosing among innovation or intensification strategies, specifically when there is more than one solution to a particular problem.The strongest feature of our proposed Intensification Factor (IF) is its simplicity in arithmetic operations, and the possibility to get a “value” even when detailed information is not available at early or advanced stages of a project.As will be seen in the following sections, this tool can be used in combination with already existing methods, expanding the toolbox and methodologies engineers and scientists require.After presenting the method, we provide several test cases and discussions to illustrate how the method can be applied in practice in Section 3.The IF is composed of modular, interchangeable evaluation criteria or factors.A convenient aspect is the possibility to combine qualitative and quantitative factors.We envisage this IF number as a tool that can assist in the decision-making process at different levels, such as in the laboratory when researchers try to compare one setting or feature change, at the plant or equipment level in PI or PSE, but also at the managerial and consumer/commercial level.The individual factors can be as many as needed, or as the available information allows.We consider that with this approach there is no “focusing limit” for the application of this tool; it can be applied at all scales in the PI strategy, e.g. molecules, structures, units, PSE, etc., and there is freedom to couple the qualitative aspects to costs when required.In a hypothetical plant, there can be different processes, units or even independent pieces of equipment needing intensification or improvement of any of their parameters.A given F can be the operation time, the yield of a given reaction, or the residence time through a reactor to allow a reaction to occur.For a given factor F, we have as input data its initial value Fb and its value after the modifications Fa.An exponent serves our method in two ways: first, its sign is determined by whether a decrease or an increase in F is beneficial; second, its absolute value is taken as a weight factor that depends on the importance of the factor with respect to the final goals of the intensification strategy.Table 1 illustrates the steps and required values in order to obtain an individual intensification factor IF.From a mathematical point of view, an almost obvious limitation of this method arises when a zero value appears in the denominator, or the factor is annulled if a zero appears in the numerator.In practice this limitation can be circumvented.For example, where temperature or pressure values are used which in some scales can reach “zero”, a different scale could be used.A step-by-step procedure utilising the proposed method is illustrated in Fig. 2.
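To make the calculation concrete, the following short Python sketch shows one way the individual and total IF could be computed as we understand the description above; the function and variable names (intensification_factor, f_before, f_after, total_if) and the example numbers are our own illustrative choices and are not taken from Table 1 or Table 2.

from math import prod

def intensification_factor(f_before, f_after, d, c=1.0):
    # Individual IF for one factor: (F_before / F_after) ** (d * c), where
    # d = +1 if a decrease in F is desired, d = -1 if an increase is desired,
    # and c >= 0 is the weight assigned by the experts (c = 1 for a base case).
    if f_before == 0 or f_after == 0:
        raise ValueError("Use a scale without zero values, e.g. absolute temperature.")
    return (f_before / f_after) ** (d * c)

def total_if(factors):
    # Total IF as the product of the individual factor IFs.
    return prod(intensification_factor(*f) for f in factors)

# Hypothetical usage with made-up values (not the values of the OBR case below):
factors = [
    (383.0, 358.0, +1),  # temperature in K; a decrease is desired
    (10.0, 1.0, +1),     # reactor volume in m3; a decrease is desired
]
print(total_if(factors))  # a result > 1 would indicate an overall improvement

A product greater than one would indicate that, under the chosen weights, the proposed modification is preferable overall; note that the temperatures in this hypothetical example are expressed in kelvin, consistent with the guideline above of avoiding scales that can reach zero.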
The objectives and weights need to be found depending on the particular situation.In those cases where performing an experiment is not possible, or where a lack of data does not permit the “after” assessment, the experts should guesstimate and reach a consensus.For example, the variables or factors could be specific variables associated with economy, safety, control, etc.In the following sections we will provide a number of cases to illustrate the way to apply the method described above; a discussion of these cases is given in Section 4.The oscillatory baffle reactor (OBR) was introduced as a novel form of continuous plug flow reactor, where tubes are fitted with equally spaced constriction orifice plate baffles .The baffles are shaken in an oscillatory manner, in combination with the flow of the process fluid.It has been employed for the conversion of a batch saponification reaction to continuous processing, which resulted in a 100-fold reduction in reactor size and greater operational control and flexibility .The greatest driver for making this a continuous processing reaction was safety, because continuous operation could considerably reduce solvent inventories.Furthermore, operating at a lower temperature of 85 °C, closer to the ambient-pressure boiling point of the solvent, had a positive impact on safety.This new temperature could also be associated with energy savings, combined with the improved heat transfer of the new reactor design.Among the several advantages, the size reduction helped decrease the residence time, operation costs and down-time.A conceptual industrial-scale unit, a 20-pass, 500 l OBR, has been reported to produce continuously at a rate of 2 T/h assuming a 15 min mean residence time .The factors used for the IF calculation of this test-case are Temperature, Pressure, Volume and Residence time, which are listed in Table 2.Since a decrease in Temperature is desired, the d value is taken as positive.We have assumed that a decrease in pressure is desired due to safety and costs, which is why the IFpressure is less than one and decreases the IFtotal value.But a new IF number could be calculated to assess how much better it would be to operate at a higher pressure if desired, for example when the reaction kinetics would benefit from it.Similarly, for Volume and Residence time the d value is 1, since it is desired to work with smaller inventories.The final IF is 19.44 > 1, meaning that the newly proposed reactor has an overall positive performance.If desired, we could have added a “Safety” driver, for which experts would need to assign values for each alternative, either based on available experimental data or on an arbitrary scale.The strength of this “value” will be more evident when we compare more than one alternative in other examples.Several chemical and physical effects caused by ultrasound are a result of cavitation, the formation and collapse of bubbles in a liquid exposed to oscillating pressure fields .These types of reactors are widely used in laboratories and industrial applications, but the analysis and comparison of results obtained with them are notoriously difficult, which has limited the scaling up of sonochemical reactors in industry .We present here two types of reactors in which the use of artificial microscopic crevices improved the energy efficiency values for the creation of radicals .The new bubbles created with ultrasound emerge from the artificial crevices and provide a larger amount of radicals together with several other phenomena.Sonochemical effects such as radical production and sonochemiluminescence were among the intensified aspects.The energy efficiency value XUS is calculated as the product of the energy required for the formation of OH radicals and the rate of radical production, divided by the electrical power input.
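Written out explicitly, and using our own symbols rather than the original authors' notation, this verbal definition corresponds approximately to X_{US} = (E_{OH} \cdot r_{OH}) / P_{el}, where E_{OH} is the energy required to form an OH radical, r_{OH} is the rate of radical production, and P_{el} is the electrical power input to the reactor.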
With three small crevices or pits, 10 times higher energy efficiencies were reached in a micro-sono-reactor (μSR) .The same principle was scaled up, now labeled the Cavitation Intensification Bag (CIB), and applied in the operation of conventional ultrasonic bath technology with ∼900 crevices .The μSR and CIB concepts can be seen in Fig. 4.The CIB holds a volume 25 times bigger than the μSR, and provided a reduction of 22% in the standard deviation of results.The variability of sonochemical effects is a serious issue to be solved for their appropriate commercialization in industrial settings.More importantly, an increase of 45.1% in energy efficiency compared to bags without pits was achieved.In Table 3 we compare three scenarios; the first is the microreactor at the highest power with the largest number of crevices against the unmodified reactor .The other two comparisons are modified and non-modified bags for two ultrasonic baths with different frequencies and power settings .Comparing both ultrasonic baths is a useful feature of this method that cannot be easily carried out otherwise .The exponent d is negative in all cases since a higher XUS is desired.From these values we can observe that the highest intensification of radical production is achieved by the microreactor alternative.For the CIB cases, the apparently simple comparison between bags with and without pits shows that the CIB in US2 has the higher energy efficiency overall.But our method becomes more important when looking at the different baths and using the CIB by calculating the IF.Looking at the final fraction, the comparable values mean that the CIB with pits has an IF ∼ 1.4, independent of the US bath used.This is a very useful way to compare different intensification approaches.Another way to illustrate the advantages of this method with the same CIB is for cleaning applications.It has been reported that the bags are efficient in the cleaning of 3D printed parts that need to be cleared of the support material, the cleaning of microfluidic chips, and of jewellery in commercial settings .In Table 4, two examples are given for the calculation of the improved effect of using the CIBs, quantified by the time needed for cleaning and the volume of liquid required for it.The first factor has a direct relationship with costs, whereas the second has an additional positive environmental connotation, since the use of less liquid has a smaller environmental footprint.These numbers are of great importance for the evaluation and quantification of cleaning, which has been reported to be not only difficult, but also of industrial relevance .With these numbers it would also be possible to compare different cleaning methods and equipment, in settings or activities outside of academic interest.Up to this case we have only compared two alternatives.This case offers the opportunity to compare among three different alternatives, for which we compare 1 vs 2, 1 vs 3 and 2 vs 3.Three different scenarios were compared for a campaign producing 5 tons of an isolated intermediate through a multistage organometallic reaction .The first scenario is the standard, where the reaction is performed batch-wise, with six batch assets of equal size in series, each performing a specific task.The slowest step becomes the bottleneck, which is the
coupling reaction, because it takes place at cryogenic temperatures to avoid side-product formation.The second scenario is a mix of continuous and batch processing, with the Li exchange and coupling reactions performed in a microreactor at the expense of an additional investment.As a consequence, the reaction temperature is increased to avoid a long residence time, resulting in an increase in the overall yield and throughput for the coupling reaction.In this case, distillation is the bottleneck instead of the coupling reaction, but the workup operations remain the same.In the third scenario, labeled as process synthesis design, all reaction steps are performed in continuous-flow operation, which has the advantage of further reducing the batch assets and the number of operators; nevertheless, a higher additional investment is required.It is assumed that there is no further gain in yield and throughput.In this process the yield is preferred to be as high as possible because the cost of raw material is the dominant operating cost.The next largest cost is manufacturing, so the number of operators is preferred to be as small as possible, while the throughput should be as high as reachable.We observe that a global IF based on those factors gives a simple indication of the reduction in operating costs, and therefore of the increase in economic gain.We assume that any necessary additional investment, when annualised, is negligible compared to the former operating costs.When comparing the 2nd and 3rd alternatives against 1, it is evident that 3 has the higher IF value, corresponding to a greater intensification of the whole process than if 2 were selected.This is quantified by the 2 vs 3 calculation, where an IF of 1.4 is the result.Clearly, the larger the IF value, the higher the economic gain that is finally achieved, which is in agreement with the economic gain reported by .In a practical situation the number of factors might be much higher, and the possibility of talking about single numbers can be much more helpful in the decision-making process.Only factors with the largest impact on the chosen figure of merit should be selected.The multiplicative nature of the factor F implies that after comparing case 1 vs 2 and 1 vs 3, it is not necessary to compare 2 vs 3.Hence, in practical situations such an extensive table might not be of use.For clarity purposes we have decided to include the three situations.
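As a brief numerical illustration of this multiplicative property (the 1.4 value is the one quoted above for the 2 vs 3 comparison; the other figure is hypothetical and used only for illustration): if IF(1 vs 2) = 2.0 and IF(2 vs 3) = 1.4, then IF(1 vs 3) = 2.0 · 1.4 = 2.8, so only two of the three pairwise comparisons ever need to be computed explicitly.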
From a study reported elsewhere , Portha et al. analyse several cases, from which we took an example to demonstrate another advantage of our simplified method.It has been argued that a direct link between an intensified process and inherent safety does not always hold.A global analysis of the process should be performed instead, because small hold-ups could be sensitive to disturbances causing rapid changes within the process.As a result, safety and product-quality constraints would be affected before corrective actions are put in place.To select just one example, the comparison of benzene nitration in two alternative scenarios, with the same objective of 96% conversion, is taken here.The first scenario is a large CSTR whilst the second has two small CSTRs; each reactor is equipped with a cooling jacket.When dynamic considerations are not included, the intensification principles would favour the two small reactors scenario, since the reduced inventory of dangerous materials and the more compact and smaller equipment can be converted into lower capital cost and also less coolant, the latter due to the larger cooling heat flux per reacting volume of the reactor.The response of each configuration to a step in the benzene flow, increasing to 120% and decreasing to 80% of the nominal value of 100%, was calculated.Larger temperature deviations were found for the intensified scenario due to the lower thermal capacity of the smaller reactors, which implies a less robust process for damping the heat released in the reaction.The relevant factors in this case, having d = 1, are the volume and the temperature deviations, since both are desired to be smaller in the intensified version.The original analysis made by the papers cited above can be contrasted with the calculated IF = 0.52, which means that the intensified alternative seems not to be better than the current large CSTR.If the maximum temperature deviations that are allowed in order not to affect process safety and product quality were known, the final decision could be taken on a more justified basis.For instance, if the maximum temperature deviation observed in the intensified case had a negligible effect on process safety or product quality, the temperature deviation factor would be irrelevant in the comparison.From a philosophical and moral perspective, we are of the opinion that in cases such as the one just described, the value of the weights emerges as an important tool.A higher weight value could be given to those intensification factors that contribute to higher safety, compared to those having an emphasis on process performance.Since values can only be assigned to the weights by experts in the specific process, we prefer not to speculate about this.We base this example on a study of an approach mentioned in Section 1, focused on the reduction of process inventory .The process intensification approach considers the minimisation of the inventory for a given production, while classical superstructure optimisations usually consider profit maximisation or total cost minimisation.It can be argued that such a statement is true in a rather literal sense.Indeed, many superstructure optimisations do focus on cost minimisation; however, in practice, the optimisation format easily allows for the replacement of one optimisation criterion by another one.Furthermore, having inequality constraints gives more flexibility in the problem formulation, such that it is possible to add a larger number of constraints.
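As a rough sketch in our own notation (the cited study may formulate the problem differently), such a constraint-driven search for intensified designs could be written as: minimise cost(x) subject to throughput(x) ≥ T0 and I_k(x) ≤ ε, where x denotes the design and operating variables, I_k(x) is the inventory of a selected component k, and the bound ε is tightened stepwise so that the optimiser is pushed towards progressively more intensified solutions.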
For example, an upper bound on the inventory of a component in the process can be imposed, which can be gradually lowered, and the optimiser can then start searching for intensified solutions.The authors studied the reduction of the ethanol inventory for two weeks in an existing process for producing acetaldehyde via ethanol oxidation, while holding the same throughput of acetaldehyde.In the process, ethanol feedstock is vaporised, mixed with air and fed to a catalytic reactor.The reactor product is scrubbed first with cold dilute solvent and the bottoms of the scrubber are distilled in a first distillation column to recover acetaldehyde as distillate.In a second distillation column, organic wastes are collected from the top, and the bottoms are fed to a third distillation column where ethanol is separated as the overhead product to be used as fuel in a boiler.The key decision variables in the optimisation problem were the flow rates of ethanol as feedstock and solvent, the reaction temperature, the reflux ratio of the third distillation column and the reboiler heat load in the first distillation column.The minimum ethanol inventory for two weeks was 7099 tons at a process yield of 0.315.This was approximately 37% less than the amount of ethanol stored in the base case.The option of replacing the reactor and the third distillation column, while simultaneously intensifying the whole process, was also considered by the authors.The results indicate that the addition of new units provided the same reduction of ethanol inventory as the option without new units.We select the key variables to illustrate how to apply our methodology.The exponent d of the reflux ratio factor was chosen as 1, as the energy consumption of a distillation column increases with the reflux ratio.The base solution 1 is compared with a minimum inventory case 2, and a minimum inventory case 3.The calculated IF values show how superior case 3 is compared to case 2, which would be hard to spot when only focusing on replacing the existing case with 2 or 3 alone.We have opted for not comparing 1 vs 3 since F1−3 = F1−2 · F2−3.Our last example is based on a Heat Integrated Distillation Column (HIDiC) seen as an energy-conserving unit .The HIDiC combines advantages of direct vapour recompression and diabatic operation at half of the normal column height.With such a column, the consumption of exergy is reduced by half at approximately the same capital cost, with a very short pay-off time, compared to the usual vapour recompression scheme.A comparison of the utilities consumption of four different designs of the base-case propylene splitter is provided there, of which we select two HIDiC cases to compare the 18/13 bar and 18/15 bar pressure setups of the rectification/stripping sections.It is reported that a lower consumption of exergy, relative to the conventional vapour recompression system, is achieved for the HIDiC with the larger stripping section pressure.This is due to the change in the utility consumption from the 18/13 bar case to the 18/15 bar case.The required heat transfer area is the external surface area of the shell of the rectification section column.The increase in column weight is a result of the need to increase the heat transfer area between the stripping and rectification sections to compensate for a smaller temperature gradient between those sections.This is due to the larger operating pressure in the rectification section in the 18/15 case.In this case we will highlight a powerful feature of our method, in which we have broken the total number of factors to intensify into three sub-analyses.In the first one, the factors are column
height, diameter, weight and bed volume, with which we calculate IFa = 0.71.The IFb = 1.43 is calculated from the transfer area, the tube diameter and the pitch.Notice that if we only had access to this information, we would select the 18/15 bar design as the best alternative.When we include in our comparison the utilities consumption of these two designs, IFc = 1.78, replacing the 18/13 bar design with the 18/15 bar design gives an IFtotal = IFa · IFb · IFc = 1.81, bigger than one, meaning that the overall change of equipment is desirable.This example compares two intensification options, catalytic reactive distillation and absorption, for the production of biodiesel by esterification of waste oils with a high free fatty acid (FFA) content .The esterification of FFA with methanol produces FAME and water as a by-product.This reaction is reversible, meaning that by using reactive separation technologies, water can be removed from the reaction medium as the reaction proceeds, allowing for the complete conversion of FFA, while obtaining high-purity FAME with a single process unit.In the process based on catalytic reactive distillation, methanol and FFA are fed to the reaction zone of the distillation column loaded with a solid acid catalyst.Methanol is consumed in the reaction zone, and as a consequence, a mixture of acid and water is easily separated at the top.After decanting, the acid-rich phase is refluxed to the column, while water is obtained as distillate.High-purity FAME is obtained from the bottom stream after removing methanol by an additional flash.Since the reactive distillation column employs extremely low reflux, it behaves rather as a reactive absorption unit, and not as a real reactive distillation unit .Therefore, in the process based on catalytic reactive absorption no products are recycled to the column in the form of reflux or boil-up vapours.Table 9 shows key variables for the comparison between the reactive-absorption and reactive-distillation processes .We use these variables to study three intensification factors: the investment cost of the column, the overall cost of the heat exchangers and the operating costs.We define the intensification factor of the column as the ratio of column shell investment costs.The cost of a column shell depends on its weight, with a scale exponent around 0.85 .If we assume that the same thickness and material of the column will be used for the reactive absorption and distillation processes, the intensification factor is proportional to the ratio of column volumes.For the sake of simplicity we neglect the cost of column internals.The calculated column IF is 0.85, which indicates that the column for the reactive distillation case will be around 15% cheaper.Here we provide another real-life case, kindly provided by Oasen, a water utility company that uses sand filters, aeration, active carbon and UV disinfection for the production of drinking water using infiltrated surface water as the source.The traditional process begins by obtaining filtrated water from the river bank, followed by one aeration step, a sand filter step, softening, another aeration and sand filter step, passage through an active carbon filter, and a final ultraviolet disinfection step that renders the water drinkable.The log removal value (LRV) is a very strong indicator for the removal efficiency of a particular component.
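For reference, the LRV is commonly defined (this is the standard water-treatment definition, not a formula taken from Oasen's report) as LRV = log10(C_in / C_out), where C_in and C_out are the concentrations of the target component before and after treatment.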
The higher the LRV, the higher the drinking water quality, e.g. 1 LRV = 90% reduction of the target component, 2 LRV = 99% reduction, and so on .Note that we have included in our analysis a factor of relevance not only for the company, but also for the environment: sustainability.The sustainability factor is based on a qualitative decision, which means there are no “hard figures”; it is just an overall impression of sustainability based on the opinion of OASEN's team.For example, they are not using CO2 equivalents, which makes the assessment less sustainability-oriented.The “+” symbol means that there is a positive perception; similarly, “+++” is a subjective assessment in which the process is assigned a numeral accordingly: +++ = 3.Here we provide a general discussion relevant to all cases analysed before.The first issue we have identified is that there might be a risk of considering a given factor or weight more than once.This could happen based on different terminologies used by experts in different activities or historical documents.Additionally, depending on how the information or “factors” are calculated, some hidden elements could inadvertently be left out.To compensate, besides the obvious benefit achieved by the normalisation of values through the ratio within each IF, the only action to reduce this possibility is to keep a transparent database and ask for external auditing of the method.Taking the OBR case as an example, Pressure is a factor that could be considered important to “decrease” in one analysis, but the opposite could also happen.For this we think it is useful to define the IF twice, covering both alternatives:When a lower pressure is desired for safety reasons, d = 1.When an increase is needed for improved kinetics, d = −1.Alternative 1 would have an IFtotal = 1195 as calculated in Section 3.1, whereas the new IFtotal = 3443 of Alternative 2 would indicate a stronger argument to replace the existing equipment.Here the role of the analysts or experts becomes the most important decision step: deciding whether safety is more “desired” than the improved kinetics alternative.If the experts decided to include both alternatives for a more inclusive analysis, a new Intensification Factor could be calculated having different d values.As mentioned in Section 2.1, the choice of scales used for each F when calculating the IF value needs to follow some basic guidelines, such that it is invariant to a change of the physical scale in any of the performance factors Fi.If two intensification teams in two different locations are working on the same problem, there needs to be an agreement on which scale to use, for example in the case of temperature.To avoid these situations, it is necessary to use scales that have an absolute zero in a physical sense.The final point we want to emphasise concerns the last case, where the importance of having all available information for a given analysis is evidenced.For the case of a total value IFtotal = IFa · IFb · IFc, having only the values corresponding to sub-analyses a and b would motivate the change to the second option.If the decision had been based on the technical experts alone, the result would certainly be different from the situation in which the economic aspects c are included.We believe we have given sufficient evidence of the advantages of using a simple evaluation tool based on a method for the calculation of intensification factors.Together with a step-by-step procedure, and examples extracted from the scientific literature as well as from industrial practice, we think the reader can start applying this tool to his or her own problems.This method
has been employed in pedagogical settings while teaching a Process Intensification Principles course at the University of Twente.The students have managed to better understand the advantages of intensifying a given process by making use of this simple method.Another important argument is that this method might seem superfluous to experts who have worked for many years in the intensification or innovation of chemical processes.However, for outsiders or non-experts on a particular process to be improved, we believe our proposed method comprises very simple mathematical operations that can be understood by most educated persons without a specialisation in chemical engineering.For example, in companies such as small and medium enterprises, spin-offs or other multidisciplinary settings, there is normally only one expert; convincing other non-experts from marketing, finance, etc., is a challenge we have aimed at resolving with this method.Our simple method rests on the value assignment of two exponents: ci and di.The first leads to a “base case” at the beginning of a project, when all ci = 1, and this IFtotal value can be used as a benchmark for improvements in advanced phases of the specific project.The di allows one to express whether an increase or a decrease of a given factor is desired.More limitations besides those hinted at in this work will be found as the method is tested in real-life scenarios.Identifying the weak aspects and improving them, such as increasing the analytical power, will be more efficient as other colleagues use it and their findings are reported.Practice will tell if this simple method is of use beyond what the authors have already identified and reported here.We are aware that it has already been used by a spin-off company, BuBclean VOF, The Netherlands, to report to their clients and in subsidy proposals.Similarly, OASEN BV, The Netherlands, has used the method and compared the result of using this method with an existing business case employed for the decision of building a new plant.As a follow-up to this paper, we have created a group on LinkedIn as a means to open a discussion where academic and industrial scientists can share their experiences in using this method.The title of the group is Intensification Factor initiative and its weblink is https://www.linkedin.com/groups/7062911.We expect experts from different communities to share their ideas and experiences to test the validity of this method.DFR is a co-founder of BuBclean and has no financial interest in it; WvdM is CEO of OASEN, and no conflict of interest has been identified.
A method for the comparison of scenarios in the context of Process Intensification is presented, and is applied to cases reported in the literature, as well as to several examples taken from selected industrial practices. A step-by-step calculation of different factors, all relevant to chemical engineering and cleaning processes, is also given. The most important feature of this new method is the simplicity of its arithmetic operations, and its robustness in cases where there is limited information to provide a good assessment. The final calculated value, the Intensification Factor, provides an interesting decision-making element that can be weighted by experts, no matter which level of detail or which particular activity is considered (economic, technical, scientific). Additionally, it can contain as many quantitative and qualitative factors as are available; they are all lumped into a number with a clear meaning: if it is larger than one, the new alternative is superior to the existing one; if it is smaller than one, the opposite applies. The proposed method is not to be considered only as a tool for experts in the specific process intensification discipline, but as a means to convince outsiders. Also, it can be used in educational settings, when teaching young professionals about innovation and intensification strategies. A discussion forum has been created to evaluate and improve this method and will be open to professionals and interested researchers who have read this paper.
475
Validation of hippocampal biomarkers of cumulative affective experience
The general public have long been concerned with the welfare of laboratory, farm, zoo and companion animals.This concern stems from the assumption that, like humans, many other animals can consciously experience affective states.However, because the subjective conscious component of affective states described by humans as ‘feelings’ is difficult to assess in non-verbal animals, welfare researchers usually focus on measuring the physiological and behavioral components of affective states.Therefore, following Paul and colleagues, when we use the terms ‘affective states’ and ‘affective experience’, we refer only to objectively measurable physiological and behavioral responses.Furthermore, we adopt the common two-dimensional model of affect in assuming that core affective states can be described as combinations of the valence and intensity of the animals’ experiences.Traditionally, animal welfare science has been mostly concerned with the current short-term affective state of an individual resulting from a particular event.More recently, however, emphasis has shifted towards the lifetime experience of animals, reflected in the related concepts of ‘quality of life’ and ‘cumulative experience’, and this shift is now reflected in legislation regulating animal use in science and in recommendations for the farming industry.Cumulative experience can be defined as the net impact of all the events that affect the welfare of an animal over its lifetime, be it negatively, positively, and/or by way of amelioration.In order to avoid confusion with the non-affective definition of experience, we will henceforth refer to this as cumulative affective experience.The shift in concern in the field of animal welfare research from acute to cumulative affective experience raises the question of how to measure the latter.For regulatory purposes, cumulative affective experience is currently assessed using crude objective physical indicators such as body weight, which lack sensitivity for detecting subtle changes in welfare, and clinical impression, which is subjective and open to disagreement.Other proposed methods also suffer from limitations.One potential solution might be to record all the putatively positive and negative stimuli that an animal has been exposed to over time and add these up to produce a measure of cumulative affective experience.However, different animals respond very differently to the same stimuli, and the Pickard report reached the conclusion that ‘there is no mathematical way of integrating all positive and negative events in an animal's life’.We therefore need an indicator that reflects each individual animal's response to the stimuli to which it has been exposed, rather than a record of the stimuli themselves.Here, we use recent evidence to argue that such a marker of cumulative affective experience can be found in the brain, and more specifically in the hippocampus, a well-studied brain area involved in learning, memory, and stress regulation.Following a general introduction to the mammalian hippocampus, we will explore the criterion, construct, and content validity of these hippocampal biomarkers as indicators of cumulative affective experience in mammals.We will then discuss confounding factors and propose potential strategies to control for them.Finally, we will present preliminary evidence supporting the potential of these hippocampal biomarkers for assessment of cumulative affective experience in non-mammalian species as well.We will conclude with some practical considerations for
implementing these markers in various settings.The mammalian hippocampal formation is a bilateral, oblong, forebrain structure.The hippocampus can be subdivided into three anatomically distinct fields visible in cross section: the subiculum, the cornu ammonis and the dentate gyrus.Recent evidence drawn from gene expression, anatomical, and functional connectivity studies indicates that the hippocampus can be further subdivided into three main regions along its longitudinal axis.Since the anterior hippocampus in primates is homologous to the ventral hippocampus in rodents, and the posterior hippocampus in primates is homologous to the dorsal hippocampus of rodents, we will henceforth refer to these subdivisions as anterior/ventral and posterior/dorsal respectively.While the hippocampus is perhaps better known for its role in learning and memory processes, it also plays a central role in emotional regulation.One way the hippocampus regulates acute affective experiences is by applying strong negative feedback to the hypothalamic-pituitary-adrenal axis, a central component of the stress response system.The activation of the HPA axis by a stressor induces the release of glucocorticoid hormones into the circulating blood.Following termination of the stressor, glucocorticoid concentrations slowly decrease to pre-stress levels and this recovery is regulated by negative feedback of glucocorticoids onto their receptors in the brain, especially in the hippocampus.The hippocampus therefore exerts regulatory control over the HPA axis.Behavioral studies indicate that the two main functions of the hippocampus, learning and memory, and emotional regulation, are spatially segregated, with the posterior/dorsal part of the hippocampus mainly involved in learning and memory, and the anterior/ventral part mainly involved in affective experiences.Evidence supporting this spatial segregation comes from studies showing that lesions in the ventral hippocampus of rats impair defensive fear expression but not spatial memory, while lesions in the dorsal part have the opposite effect.This functional segregation is supported by different anatomical connectivity, with the anterior/ventral hippocampus being mainly connected to brain regions involved in emotional regulation and the posterior/dorsal region being connected to brain regions involved in spatial memory.The hippocampus, and more specifically the dentate gyrus, is one of only a few brain regions where neurogenesis, the birth of new neurons, occurs throughout postnatal life in the healthy mammalian brain.The rate of neurogenesis varies between species and the existence of adult neurogenesis has been questioned in some mammals, especially humans.However, the dominant opinion is that the claim that neurogenesis occurs in adult humans at a functionally-relevant rate is robust.New neurons are born in the dentate gyrus where they mature and become functional, growing axons that connect to other hippocampal subdivisions.An increasing number of studies suggests that neurogenesis plays an important role in learning and memory and also in emotional regulation.Furthermore, the spatial segregation of learning and memory vs. 
affective experiences observed at the level of the whole hippocampus is suspected to be mirrored by a similar segregation of the role of new neurons according to their birth place.The hippocampus is not only involved in regulating the stress response, it is also very sensitive to the effects of stress.In particular, two macroscopic and two microscopic categories of hippocampal biomarkers have been shown to be sensitive to stress.The microscopic categories, which are quantified post-mortem, are the rate of neurogenesis, defined as the rate of precursor cell proliferation and/or the rate of new neuron incorporation, and the structural characteristics of mature neuronal cell bodies.Macroscopic biomarkers, which reflect without distinction the two microscopic biomarkers at a larger scale, are the size of the hippocampus, and the local amount of grey matter in the hippocampus.The local amount of grey matter is typically measured in-vivo, using magnetic resonance imaging, while hippocampal volume can be measured in-vivo or ex-vivo.Validation of new markers usually requires the establishment of three different types of validity: 1) criterion validity, which examines the correlation between the new marker and a pre-existing marker considered to be the current gold standard, where such exists; 2) construct validity, which shows whether a marker follows relevant theoretical assumptions of the phenomenon of which it is a marker; and 3) content validity, which refers to the extent to which a marker encompasses all facets of a given construct.The distinction between construct and content validity is to some extent artificial.Indeed, if encompassing all the facets of the construct is considered one of the theoretical assumptions the marker should fulfil, content validity becomes a sub-type of construct validity.We follow the approach of combining the assessment of construct and content validity.Establishing criterion validity requires comparison of the new marker to a pre-existing marker considered as the current gold standard, where such exists.However, there is currently no gold standard method for measuring cumulative affective experience in non-human animals.We therefore turn to data from humans to explore criterion validity.Two psychological constructs closely related to cumulative affective experience in humans are self-esteem and subjective psychological well-being.Subjective psychological well-being is a self-report measure of life satisfaction driven by autonomy, environmental mastery, personal growth, positive relations with others, purpose in life and self-acceptance.Self-esteem is a broadly defined personality variable referring to the degree to which an individual values and accepts him or herself and is a strong predictor of subjective psychological well-being.One would thus predict hippocampal biomarkers to correlate with these constructs in humans.In accordance with our predictions, subjective well-being and self-esteem are positively correlated with hippocampal volume.Thus, there is evidence to suggest that hippocampal biomarkers correlate with psychological concepts in humans, which are close to the concept of cumulative affective experience.Another related concept in humans is mood.Moods are usually considered to be long-lasting affective states resulting from the integration of positive and negative acute experiences over time.Although the time window in which mood and cumulative affective experience integrate acute experiences could be different, the construct of mood is very close 
to that of cumulative affective experience.In humans, moods can be verbally reported and systematically assessed via structured questionnaires.Several meta-analyses have shown that both hippocampal volume and the local amount of grey matter in the hippocampus are consistently lower in patients suffering from two clinically-defined mood disorders: major depression and post-traumatic stress disorder.Longitudinal studies have also shown that various mood-improving treatments induce an increase in hippocampal volume and the local amount of hippocampal grey matter in depressed patients.Human data therefore show that some of our proposed hippocampal biomarkers co-vary with long-term affective states, as would be required in order to establish criterion validity.Moods are difficult to assess objectively in non-verbal animals.However, neuroscientists have developed several behavioral tests that are argued to measure mood in laboratory animals.These tests have been validated using drugs shown to have clinical efficacy in treating human mood disorders.Using such tests, hippocampal volume, local amount of grey matter, and neurogenesis rate have all been shown to be reduced in rodent and macaque models of depression and anxiety, and to be increased by anti-depressant drugs.Furthermore, increased neurogenesis has been shown to be necessary to observe some of the behavioral effects of anxiolytic and anti-depressant treatments in stressed animals.Although suppression of neurogenesis achieved by transgenic manipulation or irradiation has been shown to be sufficient to induce depressive behavioral symptoms in some studies, in many others it did not result in any changes in depression-like behaviors, unless further stressors were also experienced.The most recent hypothesis on the causal role of hippocampal neurogenesis in depression and anxiety symptoms is that low levels of neurogenesis make an animal more sensitive to environmental stressors.It is therefore clear that conditions that lead to changes in mood-like states in animal models of depression and anxiety also change hippocampal biomarkers, and the current thinking is that the hippocampal structures are involved in mediating this mood response, either directly or indirectly.In summary, there is robust evidence from non-human mammals, and to a lesser extent from humans, for a strong association between changes in mood and hippocampal biomarkers.This evidence demonstrates the criterion validity of hippocampal biomarkers for the assessment of cumulative affective experience.Construct validity refers to the extent to which a marker follows theoretical assumptions of the construct it is proposed to reflect.A good marker of cumulative affective experience should fulfil the following assumptions: 1) it should respond to a wide range of events inducing changes in enduring affective states and co-vary in opposite directions with events inducing positively- and negatively-valenced experiences; 2) it should reflect the affective response of each individual to an event, rather than the objective event itself; and 3) it should integrate discrete experiences over time.A large number of studies in different mammalian species have measured the impact on the hippocampus of events known to induce a negative long-lasting affective state.In humans, meta-analyses have consistently found associations between psychological trauma and a smaller hippocampal volume or local amount of grey matter.In non-human species, the vast majority of studies revealed that the macro and 
microscopic hippocampal biomarkers decrease with chronic exposure to a variety of aversive events such as restraint, social defeat, social isolation and maternal neglect.While most human studies are correlational, experimental animal studies have demonstrated the causal role of chronic stressors in decreased hippocampal volume, local amount of grey matter, neurogenesis rate and size of cell bodies and dendritic trees.Cumulative affective experience does not have to be only negative.Consequently, to possess content validity, markers of cumulative affective experience must co-vary with exposure to events inducing negative and positive experiences in opposite directions.The case for negative experiences decreasing the hippocampal biomarkers we are reviewing here is strong and incontrovertible.However, crucially, these same hippocampal biomarkers have also repeatedly been found to increase when individuals were chronically exposed to events known to induce positive affective states.Systematic reviews and meta-analyses have shown that voluntary physical activity and mindfulness meditation, which are both associated with positive effects on mood in humans, increase hippocampal volume in human subjects.Results from randomized controlled trials show that exposure to events inducing enduring positive affective states cause the changes in hippocampal biomarkers.In rodents as well, sexual behavior, voluntary physical activity, and cage enrichment, which are well-established rewarding events, have been experimentally shown to consistently increase rodent hippocampal volume, neurogenesis rate, and the size of the dendritic tree and the spine density of hippocampal neurons.In marmosets, cage enrichment has also been shown to enhance the length and the complexity of the dendritic tree of hippocampal neurons.In rodents and non-human primates, diverse events inducing enduring positive affective states have thus been shown to cause an increase in the different hippocampal biomarkers.The effect of exposure to aversive events on the hippocampus is known to be mediated, at least partially, by high levels of corticosteroids.However, chronic exposure to rewarding events is also known to be associated with an increased concentration of circulating corticosteroids and in one rodent study, corticosteroids were found necessary both for the effect of cage enrichment on neurogenesis enhancement, and for the effect of chronic stress on decreased neurogenesis.These studies indicate that the relationship between corticosteroids and the hippocampal biomarkers is complex and illustrate neatly that measuring a change in corticosteroid levels will not predict the direction of a change in hippocampal biomarkers and affective state valence.The affective reaction of an individual, whether conscious or not, depends not only on the event itself but also on characteristics specific to the individual, including its genotype and its previous experiences, both of which may affect how the individual responds to a given event.Using the number of events known to have the potential to induce a change in affective experience as a proxy for cumulative experience of an individual can thus be inaccurate.A good marker of cumulative affective experience should reflect the impact of events at the individual level; in other words: the individual’s response to the events.In support of this assumption, hippocampal biomarkers have been shown to depend on the interaction of exposure to aversive or rewarding events with genetic variants in humans and 
mice.This suggests that different individuals respond differently to the same events, and that hippocampal biomarkers reflect the response, rather than the event.A more direct way to test whether a marker reflects the response of each individual is to consider studies in which exposure to an event induced inter-individual differences in the behavioral reaction of the subjects; in such studies, a marker of cumulative affective experience should track these individual differences in response.Accordingly, the neurogenesis rate in the hippocampus of mice was found to negatively correlate with the behavioral manifestation of stress expressed by each individual, despite the fact that all individuals had been experimentally exposed to the same stressful events.Depending on their intensity and duration, as well as the genotype and previous experiences of the individual, some affective experiences leave a long-lasting trace and thus have the potential to accumulate over time, while others do not.To track cumulative affective experience, hippocampal biomarkers should thus be sensitive to the experiences leaving lasting traces and have the potential to integrate these affective experiences over time, on the same time scale as the affective experiences themselves.Therefore, if the animal completely recovers from an affective experience, and does not experience any lasting effects, we should also not expect a trace in the hippocampal biomarkers.On the other hand, if there are long-lasting changes in affect, then there should also be long-lasting changes in hippocampal biomarkers.Several studies in humans and rodents have shown that effects of affective experiences on hippocampal biomarkers can be long lasting.One set of examples where the long-lasting effect of affective experiences is very clear is for stressors experienced during development.For instance, effects of early life experiences on hippocampal biomarkers have been detected in adult rats, macaques and humans.It is worth noting that lab and farm animals are rarely given the opportunity to live beyond adolescence/young adulthood.In these instances, hippocampal biomarkers will very likely track their cumulative experience over their entire lifetime.Some could argue that stressful events experienced during development have qualitatively different effects on the brain than those experienced during adulthood.In order for hippocampal biomarkers to be useful markers of cumulative affective experience in any situation/independently of the age of the subject, they have to integrate events encountered during adulthood as well.This cumulative property of hippocampal biomarkers is supported by several studies showing that they correlate with the duration or number of discrete acute experiences.For example, in adult rats and mice, neurogenesis increases with the duration animals spend voluntarily running in a wheel over 4–8 weeks.The cumulative nature of the hippocampal biomarkers is also confirmed by longitudinal data demonstrating dose effects within subjects.For instance, the local amount of grey matter in the hippocampus decreases with the number of stressful events adult human participants experienced over the last three months; the volume of the hippocampus increases with the duration of the physical exercise program older human participants were enrolled in; and the whole hippocampal volume decreases proportionally to the number of days or weeks rats have been exposed to a stressful paradigm.To integrate positive and negative experiences over time, 
markers should not just be long-lasting, but positive and negative experiences should also be able to cancel each other out.Other experimental studies in animal models have indeed shown that the hippocampal biomarkers reflect the net effect of combinations of events that provoke affective states of opposite valence within the same individuals.It should therefore be possible to assess cumulative affective experience if we restrict ourselves to measuring changes in hippocampal biomarkers over short periods.When changes are measured over longer periods, we should be aware that an absence of significant differences might be attributable to a lack of sensitivity of the method to experiences that happened during adulthood but long ago.Overall, these studies indicate that the ability of the hippocampal biomarkers to integrate discrete affective experiences over time does not seem to be restricted to a specific period of life.Hippocampal biomarkers have been found to be sensitive to the accumulation of discrete experiences occurring during childhood as well as during early and late adulthood.The exact length of the integration window needs to be studied in more detail, but it is clear that in some instances it can be very long: the hippocampal volume of human subjects has been found to correlate with the number of major stressful events they experienced over their whole life.We have established that hippocampal biomarkers are closely associated with the enduring affective states of mammals, and may even be involved in modulating these affective states.However, processes other than affective states can also influence these hippocampal biomarkers.To be able to interpret a hippocampal biomarker in terms of cumulative affective experience it is therefore necessary to control for these potential confounding variables.Hippocampal biomarkers are known to vary for several reasons unrelated to affective state, including age, sex, total brain size, genotype, and the non-affective component of experiences.Some of the microscopic hippocampal biomarkers are also known to change with the acute affective state of an individual.These various potential confounding factors need to be taken into account when using hippocampal biomarkers to assess cumulative affective experience.It is important to be aware of these confounding variables, but generally they can be eliminated by good experimental design and/or statistical analysis.For instance, one can choose to study individuals of the same age or same sex.Age and sex can also be matched between groups that are to be compared.Genotype can be controlled for by comparing large groups of individuals.For markers that can be quantified in vivo, the effect of genotype and sex can also be eliminated by using a longitudinal design and studying within-subject effects.Factors including age, sex and total brain volume can also be controlled for by including them as covariates in the statistical analyses.As described previously, the hippocampus is anatomically and functionally divided along its longitudinal axis, with the anterior/ventral part being more involved in affective states and the posterior/dorsal part more involved in learning and memory.This functional segregation does not seem to be limited to acute affective states but extends to enduring ones.For instance, a recent study showed that granule cell activity in the anterior/ventral dentate gyrus has anxiogenic effects, and that new neurons in the anterior/ventral dentate
gyrus inhibit this activity, conveying more stress resilience to the animals.Even if this spatial segregation is not absolute and some counter-examples have been described, it should be possible to reduce the probability that non-affective changes affected hippocampal biomarkers by focusing specifically on the anterior/ventral hippocampus.Measures focusing on the anterior/ventral part of the hippocampus should be more sensitive than those applied to the whole structure.Indeed, an increase in a hippocampal biomarker in the anterior/ventral part can sometimes be accompanied by a decrease in the posterior/dorsal parts of the hippocampus; in such circumstances, a whole hippocampal volume approach is likely to lead to false-negative results.A complementary approach for excluding non-affective confounders consists of combining the hippocampal biomarker with another marker of affective state.For instance, cumulative measurement of corticosteroids could be used.Although corticosteroid levels are believed to be more sensitive to arousal than affective valence, a change in valence is usually associated with a change in arousal, independently of the direction of the change in arousal: mood-deteriorating chronic stress, and mood-improving physical activity, sexual behavior and cage enrichment are usually associated with an increase in corticosteroid levels, whereas mood-improving mindfulness meditation is associated with a decrease.In contrast, corticosteroid levels are not expected to change due to learning and memory processes alone.Therefore, if a change in corticosterone levels is found to accompany a change in a hippocampal biomarker this should rule out the hypothesis that the latter change is only due to a change in non-affective cognitive processes.Acute stressors can also decrease neurogenesis and alter the structural characteristics of mature neuronal cell bodies.The microscopic hippocampal biomarkers can only be taken post-mortem and thus require prior euthanasia.Even when the animal is killed in the most humane way possible, the event might still induce some acute stress and thus potentially impact the hippocampal biomarkers.Comparing groups exposed to the same euthanasia protocol should control for this potential confounding factor.In the specific case of neurogenesis, quantifying markers of late stages of neural differentiation should also eliminate this potential confound.Macroscopic neuroimaging in-vivo hippocampal biomarkers require anesthesia or head restraint, two procedures that could potentially induce some acute stress.So far, macroscopic hippocampal changes have never been observed after acute stress, probably because the scale of the changes occurring after acute stress are too small to be detected with this approach.However, with technical improvement, it might become a problem for longitudinal designs in the future, or even today if measurements are taken at a high frequency, in which case the stress could become chronic.In this case, the number of measurements should be included in the statistical models to control for potential stress induced by the measurements themselves.Hippocampal biomarkers are closely associated with enduring affective states in various species of mammals.They co-vary with a wide range of events inducing positive and negative affective states.There is some evidence to suggest that hippocampal biomarkers track the enduring affective states of mammals taking into account their individual responses, rather than the events.Hippocampal biomarkers have been 
shown to reflect the accumulation of positive and negative affective experiences over long periods of time and possibly the whole life of individuals.This evidence is strong enough to start using these biomarkers as indicators of cumulative affective experience in the context of animal welfare.Only wider use of such biomarkers will allow us to determine the limits of their usefulness as welfare indicators.The hippocampus is an evolutionarily conserved region, and homologues have been described in all vertebrate lineages.The stress response system, including the HPA axis is also highly conserved in vertebrates.This opens the possibility that the hippocampal biomarkers described above might also be an indicator of cumulative affective experience in non-mammalian vertebrate species.In birds, adult neurogenesis takes place in numerous brain regions throughout life, including the hippocampus.The role of the avian hippocampus in learning and memory is well established and new-born neurons are suspected to play an important role in this process.A potential role of the hippocampus and its new-born neurons in emotional regulation is only starting to emerge.As in mammals, the avian hippocampus has high expression levels of glucocorticoid receptors and the density of these receptors, especially mineralocorticoid receptors, is regulated by stress.Several studies have shown a reduction of avian hippocampal neurogenesis and/or volume in potentially stressful situations and an increase with enrichment.However, in all these cases, the authors have either favored or been unable to exclude the possibility that the effects were due to a change in spatial memory abilities.Recently, food restriction was found to be associated with reduced hippocampal neurogenesis and chronically elevated corticosterone levels in chickens.In this paradigm, changes are unlikely to be driven by a change in spatial memory.Nevertheless, there is a clear need for more controlled experiments testing the specific role of positive and negative enduring affective states on the avian hippocampus volume and neurogenesis.In fish, neurogenesis takes place in numerous parts of the brain, one of them being homologous to the mammalian hippocampus.Due to this ubiquity, neurogenesis has usually been assessed in the whole brain, without differentiating the results according to brain regions.As in mammals, brain neurogenesis in fish seems to respond in opposite directions to positive and negative affective experiences.Down-regulation of neurogenesis by stress has been shown in various species of fish, using different stressors.In contrast, environmental enrichment was found to increase brain cell proliferation in electric fish and zebrafish.Dose-dependency has also been found, with cell proliferation correlating with the predation or social pressure.However, teleost fish have a very high rate of adult neurogenesis compared to mammals, and whether a neurogenesis marker has the capacity to integrate experiences over long periods of time is currently unknown.We currently cannot interpret absolute values of any hippocampal biomarker, because we lack quantitative definitions of what constitutes good or bad cumulative affective experience.Consequently, only relative measures can be interpreted.This problem is common to any welfare indicator.The main potential of these hippocampal biomarkers we envisage for the near future is thus to compare the effects of different housing conditions or husbandry and experimental procedures on the cumulative experience 
of animals, rather than assessing the absolute cumulative affective experience of individual subjects.Macroscopic hippocampal biomarkers can be measured in-vivo, allowing repeated measures on the same animals.In-vivo measurements require access to magnetic resonance imaging facilities with strong magnets.While such equipment is progressively becoming widespread in academic and industrial biomedical settings, it is usually not available in farm settings.Macroscopic markers thus have the potential to be used as a research tool, but not as a practical technique for on-site welfare assessment.Microscopic hippocampal biomarkers need to be taken post-mortem; consequently, only between-subject designs are possible.However, the microscopic biomarkers do not require expensive equipment in proximity to the animals, as brains can be collected when animals are slaughtered and processed elsewhere.Their application field thus seems wider compared to macroscopic biomarkers, although still limited.Hippocampal biomarkers do not require much time with the animal compared with existing behavioral markers of mood, which can be convenient when access to the animals is time limited.They do, however, require intensive data processing and technical skills.On-going studies are trying to validate innovative ways to assess neurogenesis rate by quantifying messenger RNA of genes involved in the process using quantitative PCR rather than immuno-histochemical methods to detect proteins.Such a technique would speed up the data processing and should facilitate the implementation of this approach.Meanwhile, we envisage hippocampal biomarkers being used by a small number of experts who can provide information useful for a large number of stakeholders, for instance by comparing the welfare impact of different protocols involving multiple events.The validation of hippocampal biomarkers of cumulative affective experience mainly relies on data from human and non-human primates and from rodents.Considering how conserved the biology of the hippocampus and the HPA axis is among mammals, however, we expect these biomarkers to be valid in any mammalian species.The macroscopic biomarkers depend on two main microscopic underlying mechanisms, neurogenesis and structural plasticity of mature neuronal cell bodies.It is possible that the respective contribution of the two microscopic mechanisms varies between mammalian species.Since the two microscopic biomarkers seem to have the same properties, this variation does not matter when one uses the macroscopic biomarkers.However, we would advise any researcher interested in using one of the microscopic biomarkers in a new species to first verify that their chosen marker is sensitive to a validated manipulation of affective state.This review of recent findings in stress biology and psychiatry suggests that various mammalian structural hippocampal biomarkers have criterion, construct and content validity for assessing the cumulative affective experience of individuals.These hippocampal biomarkers seem to offer a promising objective method to identify which husbandry conditions or experimental procedures induce a deterioration or amelioration of the cumulative affective experience of captive mammals, and to test the efficacy of any attempted refinements.The in-vivo biomarkers also potentially provide an opportunity to better define humane end-points, hence decreasing potential animal suffering.In-vivo biomarkers could also play a role in assessing the quality of life of humans unable to 
self-report their well-being.More data are required to validate the in-vivo and ex-vivo biomarkers in non-mammalian species.We hope that this analysis will motivate welfare researchers, neuroscientists and clinicians to explore the potential of these new biomarkers.
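Where comparisons between housing conditions or procedures are attempted in practice, the confound-control strategy discussed above (age, sex and total brain volume as covariates) can be implemented with a simple regression model. The following sketch is a minimal illustration only: the data frame, its column names and the group labels are hypothetical, and the use of an ordinary least squares model from statsmodels is one possible analysis choice, not a prescribed protocol.

```python
# Hedged sketch: comparing hippocampal volume between two hypothetical housing
# conditions while adjusting for age, sex and total brain volume, as suggested
# in the text. All values and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "hippo_vol_mm3": [412, 398, 405, 431, 428, 440, 419, 436],
    "condition":     ["barren", "barren", "barren", "enriched",
                      "enriched", "enriched", "barren", "enriched"],
    "age_months":    [10, 12, 11, 10, 13, 12, 11, 12],
    "sex":           ["m", "f", "m", "f", "m", "f", "f", "m"],
    "brain_vol_mm3": [1980, 1895, 1940, 1930, 2010, 1950, 1900, 1990],
})

# Ordinary least squares with the condition of interest plus nuisance covariates.
model = smf.ols(
    "hippo_vol_mm3 ~ C(condition) + age_months + C(sex) + brain_vol_mm3",
    data=df,
).fit()
print(model.summary())  # the C(condition) coefficient estimates the adjusted group difference
```

In a real study the sample would of course be far larger, and a longitudinal (within-subject) design would remove the genotype and sex terms altogether, as noted above.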
Progress in improving the welfare of captive animals has been hindered by a lack of objective indicators to assess the quality of lifetime experience, often called cumulative affective experience. Recent developments in stress biology and psychiatry have shed new light on the role of the mammalian hippocampus in affective processes. Here we review these findings and argue that structural hippocampal biomarkers demonstrate criterion, construct and content validity as indicators of cumulative affective experience in mammals. We also briefly review emerging findings in birds and fish, which have promising implications for applying the hippocampal approach to these taxa, but require further validation. We hope that this review will motivate welfare researchers and neuroscientists to explore the potential of hippocampal biomarkers of cumulative affective experience.
476
Photoluminescence and photocatalytic properties of novel Bi2O3:Sm3+ nanophosphor
Lanthanide-ion-doped nanophosphors have gained massive attention owing to their potential applications in various fields ranging from displays, solar cells, bio-imaging, solid state lasers, remote photo activation and temperature sensors to drug release.Furthermore, NPs should possess superior physicochemical characteristics, such as long lifetimes, large anti-Stokes shifts, high penetration depth, low toxicity, as well as high resistance to photobleaching.Bismuth is the only nontoxic heavy metal that can easily be purified in large quantities.Semiconductors such as Bi2MoO6, BiOX, BiVO4 and Bi2O3 have a high refractive index and excellent properties for visible light absorption, photoluminescence, dielectric permittivity, photoconductivity, large oxygen ion conductivity and, notably, photocatalytic activity.At present, photocatalysis is regarded as an ideal “green” technology because it uses solar energy in many fields such as water-splitting, solar cells, water and air purification, organic waste degradation and CO2 reduction.The decomposition of dye contaminants in polluted water, as a branch of photocatalysis, has attracted great attention.To date, beyond the intensive research on conventional photocatalysts such as TiO2, ZnO, ZrO2 and other wide-band-gap semiconductors, the search for new photocatalysts with strong degradation abilities has become increasingly important.Thus, Bi2O3 can be considered a suitable host material, as it possesses all these features.Bismuth oxide is a semiconductor with attractive optical and electronic properties.Because of these properties, Bi2O3 has become an important material for several applications such as fuel cells, photocatalysts, gas sensors and electronic components.Another significant characteristic of Bi2O3 is its polymorphism, which results in 5 polymorphic forms with different structures and properties, among them monoclinic α, which is stable at room temperature, and face-centered cubic δ, which is stable at high temperature.There are various methods available for the synthesis of Bi2O3 nanophosphors, viz. sonochemical, microwave irradiation, hydrothermal, chemical vapour deposition, micro-emulsion, surfactant thermal strategy, sol–gel approach, solution combustion and electro-spinning.In this work we report the synthesis of Bi2−xO3:Smx NPs via a simple low temperature solution combustion method.Compared with the conventional methods adopted for synthesis, the solution combustion method is advantageous in view of its low temperature and reduced time consumption, which result in a high degree of crystallinity and homogeneity.The synthesised nanophosphor is characterized by PXRD and DRS.
The effect of Sm3+ doping on the photoluminescence properties was studied in detail for possible usage in display applications.At room temperature, the experiment was conducted in a reactor by utilizing a 125 W mercury vapour lamp as the UV light source.Using Acid Red 88 (AR-88) as a model dye, the UV light photocatalytic activities of Bi2O3:Sm3+ NPs were evaluated.In this experiment, 30 mg of synthesized Bi2O3:Sm3+ NPs was dissolved completely into 10 ppm of AR-88 dye solution and stirred continuously to form a uniform solution.Every 15 min, 5 ml of the dye solution was withdrawn, centrifuged and analysed with a UV–Vis spectrophotometer at the characteristic absorption band at 510 nm to quantify the degradation of the dye.The crystal morphology of the synthesised NPs was determined by PXRD using an X-ray diffractometer.Photoluminescence studies were made using a Horiba spectrofluorimeter at room temperature.FluorEssence™ software was used for spectral analysis.DRS studies of the samples were performed using a Shimadzu UV-2600 in the range 200–800 nm.For a coordination number (CN) of 6, the radius of the host cation Rh is 1.03 Å and the radius of the doped ion Rd is 0.958 Å.The calculated radius percentage difference, Dr = 100 × (Rh − Rd)/Rh, is found to be 6.99%.Direct or indirect transitions are “allowed” transitions if the momentum matrix element characterizing the transition is different from zero.This means that the transition can occur if sufficient energy is given to the particle involved in the process.Direct or indirect transitions are “forbidden” transitions if the momentum matrix element characterizing the transition is equal to zero.The transition cannot occur even if sufficient energy is given.However, a forbidden transition can sometimes become allowed.Sometimes a transition can be forbidden in first order but allowed in second order.Fig. 4 shows the excitation spectra of Bi2O3:Sm3+ NPs for 3, 5 and 7 mol%.The spectra were taken in the range of 360 nm–500 nm and exhibit bands at 365 nm, 395 nm, 418 nm, 448 nm, 465 nm and 488 nm, which are attributed to the 4f–4f transitions of Sm3+.Among these, the prominent transition at 465 nm was chosen for recording the emission spectra of the NPs.Fig. 5 shows the emission spectra of Bi2−xO3:Smx calcined at 600 °C excited under 465 nm.The spectra consist of four typical transition emission bands centered at 565 nm, 616 nm, 653 nm and 713 nm, which are due to 4G5/2 → 6H5/2, 4G5/2 → 6H7/2, 4G5/2 → 6H9/2 and 4G5/2 → 6H11/2, respectively.Upon excitation, the doped ions are raised to the higher energy state 4H9/2, from which they relax non-radiatively to the metastable state 4G5/2 through the 4F7/2, 4G7/2, and 4F3/2 levels.Since 4H9/2 and 4G5/2 are connected by closely spaced levels, these non-radiative relaxations are fast.Hence the spectra show the four transition bands originating from 4G5/2.Among all the emitted transitions, 4G5/2 → 6H7/2 is the most prominent one with strong orange emission, which is partly magnetic dipole and partly electric dipole in nature.4G5/2 → 6H9/2 is purely electric dipole, and in this study the intensity of the electric dipole transition is lower than that of the magnetic dipole one, indicating the symmetric environment of Sm3+ ions in the host Bi2O3.The variation of the PL intensity with respect to the Sm3+ dopant concentration is shown in Fig.
6.The PL intensity of the 616 nm emission increases with Sm3+ content up to 5 mol% and subsequently decreases owing to concentration quenching.The energy of the phosphor is lost through non-radiative transitions caused by the incorporation of Sm3+ in the host or by Sm3+–Sm3+ interactions when excited through vacancies.Quantum efficiency is an important parameter which determines the efficiency of nanophosphors for display device applications.The electric-dipole and magnetic-dipole transitions are generally used in the investigation of rare-earth-doped luminescent materials.However, it is challenging to calculate the J–O intensity parameters Ωt for powder materials because the absorption spectra of powder materials can hardly be recorded.Table 2 gives the J–O intensity parameters and radiative properties of Bi2O3:Sm3+ nanophosphors that are calculated from the emission spectra.From the results it is clear that the Ω2 and Ω4 values are comparatively high due to the fact that the samples generally possess higher fractions of the rare earth ions on the surface of the nanocrystals compared to the bulk counterparts.The parameter Ω2 is related to short-range effects in the vicinity of the rare earth Sm3+ ion and Ω4 is related to long-range effects.AR and τr were calculated from the emission spectra.The quantum efficiency is calculated with the corresponding equation and found to be equal to 74.8%, as shown in Table 2.An increase in quantum efficiency indicates a better applicability for display devices.It was observed that the 4G5/2 → 6H7/2 transition of Sm3+ doped Bi2O3 NPs dominates the intensity emitted by the NPs in the emission spectra.The results infer that the current NPs can be utilized for display devices.The Commission Internationale de l’Eclairage (CIE) 1931 standards were used to calculate the colour coordinates of Bi2−xO3:Smx from the emission spectra.In the colour space, these coordinates are among the most important parameters used to specify the colour quality and to evaluate phosphor performance.Fig. 7 shows the CIE 1931 chromaticity diagram for Bi2−xO3:Smx NPs excited at 365 nm and 465 nm.The CIE colour coordinates so calculated for Bi2−xO3:Smx are summarized in Fig. 7.It is clear that all the samples fall within the orange-red emission region.Fig. 7 also shows the CCT of Bi2−xO3:Smx, and the average value was found to be 1758 K.Hence, the NPs can be used as an orange-red light source to meet the needs of the illustrated applications.Acid Red-88 (AR-88) is an azo dye.Due to its intense colour, AR-88 has been used to dye cotton textiles red, and it is used here for the photocatalytic studies.The photocatalytic activity (PCA) of Bi2−xO3:Smx was analysed for the decolourization of AR-88 in aqueous solution under UV light irradiation for 60 min.The UV–visible absorption spectra of the dye for various concentrations of Bi2−xO3:Smx are shown in Fig. 8.To examine the kinetics of AR-88 dye decolourization, the Langmuir–Hinshelwood model was adopted, which follows the equation ln(C0/C) = kt + a, where k is the reaction rate constant, C0 the initial concentration of AR-88, and C the concentration of AR-88 at reaction time t.
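As a minimal illustration of how such pseudo-first-order kinetics can be extracted in practice, the sketch below estimates k by a linear fit of ln(C0/C) against irradiation time; the concentration values are hypothetical placeholders standing in for the absorbance-derived data, not results from this work.

```python
# Hedged sketch: estimating the Langmuir-Hinshelwood (pseudo-first-order)
# rate constant k from ln(C0/C) = k*t + a. The concentrations below are
# hypothetical placeholders, not measured values from the study.
import numpy as np

t = np.array([0.0, 15.0, 30.0, 45.0, 60.0])   # irradiation time, min
C = np.array([10.0, 5.6, 3.1, 1.7, 0.9])      # dye concentration, ppm (placeholder)

y = np.log(C[0] / C)                          # ln(C0/C)
k, a = np.polyfit(t, y, 1)                    # slope = rate constant, intercept = a

efficiency = 100.0 * (C[0] - C[-1]) / C[0]    # decolourization efficiency after 60 min
print(f"k = {k:.4f} min^-1, a = {a:.3f}, efficiency = {efficiency:.1f} %")
```

In the study itself, C/C0 would be obtained from the absorbance at 510 nm measured every 15 min, as described above.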
Fig. 9 shows the plot of ln(C0/C) for the photo decolourization of all Bi2O3:Sm3+ catalysts under UV light irradiation.The photo decolourization efficiency varies with the doping concentration, and after 60 min of irradiation the highest efficiency, 98.57%, was found for 7 mol%.This might be due to the fact that at 7 mol% the Sm3+ ions on the host Bi2O3 behave as electron trappers that separate the electron–hole pairs, which is essential for PCA.At other molar concentrations, the dopant may instead act as a recombination centre, and this leads to lower PCA efficiency.The present Bi2O3:Sm3+ nanophosphors were prepared by a solution combustion method.The crystallite size was found to be in the range 13–30 nm.Upon excitation at the comparably low energy of 465 nm, the phosphors emit orange light with all characteristic transitions of Sm3+ ions.A CCT of 1758 K shows that the phosphors are potential materials for warm white light emitting display devices.Further, they show an excellent photocatalytic activity, which proves the multifunctionality of the prepared nanophosphors.
The current work involves studies of the synthesis, characterization and photoluminescence for Sm3+ (1–11 mol%) doped Bi2O3 nanophosphors (NPs) by a solution combustion method. The average particle size was determined using powder X-ray diffraction (PXRD) and found to be in the range of 13–30 nm. The Kubelka–Munk (K–M) function was used to assess the energy gap of Sm3+ doped Bi2O3 nanophosphors which was found to be 2.92–2.96 eV. From the Emission spectra, the Judd–Ofelt parameters (Ω2 and Ω4), the transition probabilities (AT), the quantum efficiency (η), the luminescence lifetime (τr), the colour chromaticity coordinates (CIE) and the correlated colour temperature (CCT) values were estimated and discussed in detail. The CIE chromaticity co-ordinates were close to the NTSC (National Television Standard Committee) standard value of Orange emission. Using the Langmuir–Hinshelwood model and Acid Red-88, the photocatalytic activity results showed that Bi2O3:Sm3+ NPs are potential materials for the development of an efficient photocatalyst for environmental remediation. The obtained results prove that the Bi2O3:Sm3+ nanophosphors synthesised by this method can potentially be used in solid state displays and as a photocatalyst.
477
Dataset from chemical gas sensor array in turbulent wind tunnel
Conductometric sensing principles have been widely studied in several types of gas sensing schemes because they are stable in many environments and within a wide temperature range, sensitive to many analytes at a wide variety of concentrations, respond quickly and reversibly, and are inexpensive, while performing reasonably well in discriminating chemical analytes.Although they have been predominantly used in isolated settings that include measurement chambers, their high sensitivity and rapid response to a wide variety of volatiles distinguish MOX sensors as suitable chemo-transducers for ambient conditions.We designed a general purpose chemical sensing platform containing nine portable chemo-sensory modules, each endowed with eight commercialized metal oxide gas sensors, provided by Figaro Inc., to detect analytes and follow the changes of their concentration in a wind tunnel facility.The sensor's response magnitude to the chemical analyte is signaled by a change in the electrical conductivity of the sensor's film, which is tightly correlated with the analyte concentration present on its surface.Hence, changes in the analyte concentration are reflected in the sensor's response in real-time and are the origin of the temporal resolution.The active surface chemistry is a decisive factor in the sensitivity and the selectivity of the sensing elements.In particular, the sensing layers used in each of our sensory modules represent six different sensitive surfaces, as listed in Table 1.Sensitivity to chemicals and nominal resistance may change significantly among MOX gas sensors, even for sensors of the same type.Hence, we included some replicas of the same sensor type in the arrays, enabling further studies on sensor reproducibility and the development of algorithms to alleviate sensor variability.On the other hand, the operating temperature of the sensors in our array is adjustable by applying a voltage to the built-in heater of each sensor.The sensors' operating temperature affects all aspects of the sensor response, including the selectivity, sensitivity and response time of the sensor to volatiles.In our particular chemical sensing platform, each portable chemosensory module is integrated with a customized sensor controller implemented with an MSP430F247 microprocessor.This controller enables continuous data collection from the eight chemical sensors through a 12-bit resolution analog-to-digital converter at a sampling rate of 100 Hz, the control of the sensor heater temperature by means of 10 ms period, 6 V amplitude pulse-width-modulated driving signals, and the two-way communication with a computer to acquire the sensors' signals and control the sensors' heaters.In particular, we set the operating temperature of the sensors at 5 different levels, controlled by the voltage applied to the heater: from 4 V to 6 V with a resolution of 0.5 V.We constructed a 2.5 m×1.2 m×0.4 m wind tunnel, a research test-bed facility endowed with a computer-supervised mass flow controller system.The resulting wind tunnel operates in a propulsion open-cycle mode, by continuously drawing external turbulent air throughout the tunnel and exhausting it back to the outside, thereby creating a relatively less-turbulent airflow moving downstream towards the end of the test field.This operational mode is particularly crucial for applications that require injecting poisonous chemical agents or explosive mixtures because it prevents saturation.The gas source in the wind tunnel was controlled by a set of mass flow
controllers that, along with calibrated pressurized gas cylinders provided by Airgas Inc., provided the chemical substances of interest at selected concentrations.To create various distinct artificial airflows in the wind tunnel, we utilized a multiple-step motor-driven exhaust fan located inside the wind tunnel at the outlet of the test section, rotating at three different constant rotational speeds: 1500 rpm, 3900 rpm, 5500 rpm.We estimated the induced wind speed by means of two anemometers.The mean wind speed in the axis of the wind tunnel increased with the rotational speed: 0.1 m/s, 0.21 m/s, and 0.34 m/s respectively.The wind tunnel was used to collect time series from sensor arrays placed at different locations.The nine detection units were placed in six lines normal to the wind flow.Each line included 9 landmarks evenly distributed along the line to complete a grid of 54 evaluation landmarks.Each of the nine detection units was always placed at the same location of each sensing line, i.e. each unit was always placed at the same distance with respect to the axis of the gas plume.Finally, to measure the ambient temperature and humidity during the entire experiment in the wind tunnel we utilized the sensor SHT15.We compiled a very extensive dataset utilizing nine portable sensor array modules – each endowed with eight metal oxide gas sensors – positioned at six different line locations normal to the wind direction, thereby creating a total number of 54 measurement locations.In particular, our dataset consists of 10 chemical analyte species.Table 2 shows the entire list of chemical analytes as well as their nominal concentration values at the outlet of the gas source.To construct the dataset, we adopted the following procedure.First, we positioned our chemo-sensory platform, i.e. the 9 sensing units, in one of the six fixed line positions indicated in the wind tunnel, and set the chemical sensors to one of the predefined surface operating temperatures.One of the predefined airflows was then individually induced into the wind tunnel by the exhaust fan, thereby generating the turbulent airflow within the test section of the wind tunnel.This stage constituted a preliminary phase that allowed the flow to reach a quasi-stationary state and allowed the baseline of the sensor responses to be measured for 20 s before the chemical analyte was released.We then randomly selected one of the ten described chemical volatiles and released it into the tunnel at the source for three minutes.The chemical analyte circulated throughout the wind tunnel while the generated sensor time series were recorded.Note that the concentration reported in Table 2 represents only the concentration at the outlet of the gas source.The concentration decreases as the generated gas plume spreads out along the wind tunnel.After that step, the chemical analyte was removed and the test section was ventilated utilizing clean air circulating through the sampling setting at the same wind speed for another minute.Fig.
2 shows the typical response of the sensors after a complete measurement was recorded.This measurement procedure was reproduced exactly for each gas category exposure, landmark location in the wind tunnel, operating temperature, airflow velocity and repetition, in a random order, until all combinations were covered.The resulting dataset comprises 18,000 72-dimensional time recordings.Hence, the total number of measurements is distributed as follows: 3 different wind speeds, 5 different sensor temperatures, 10 gases, 6 locations in the wind tunnel, and 20 replicas.Finally, note that although different induced wind speeds strongly influence the structure and spatial distribution of the generated gas plumes – in the sense that slow fan speeds induce less stable patterns of the air flow direction, resulting in wider gas plumes, whereas faster wind velocities generate narrower gas plumes – there is no symmetry in the spatial distribution of the plume with respect to the main axis.A plume demonstrating perfect symmetry in real environmental conditions is rare due to the asymmetry of the volume enclosing the field, the inhomogeneous ambient temperature, and the variability of the flow direction.
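To make the structure of the dataset concrete, the sketch below enumerates the full experimental design grid and confirms the reported measurement count. The factor levels follow the description above, but the variable names and gas placeholders are purely illustrative and do not correspond to any official file or label format of the repository.

```python
# Hedged sketch: enumerating the experimental design grid described in the text
# (3 wind speeds x 5 heater voltages x 10 gases x 6 line positions x 20 replicas).
# Names and gas labels are illustrative placeholders only.
from itertools import product

fan_speeds_rpm  = [1500, 3900, 5500]                      # 3 induced wind speeds
heater_voltages = [4.0, 4.5, 5.0, 5.5, 6.0]               # 5 sensor operating temperatures (V)
gases           = [f"gas_{i:02d}" for i in range(1, 11)]  # 10 chemical analytes (placeholders)
line_positions  = list(range(1, 7))                       # 6 lines normal to the wind flow
replicas        = list(range(1, 21))                      # 20 repetitions per configuration

grid = list(product(fan_speeds_rpm, heater_voltages, gases, line_positions, replicas))
assert len(grid) == 18_000   # matches the 18,000 recordings reported above
print(f"total measurements: {len(grid)}")
```

Each element of the grid corresponds to one 72-dimensional time recording (9 modules × 8 sensors).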
The dataset includes the acquired time series of a chemical detection platform exposed to different gas conditions in a turbulent wind tunnel. The chemo-sensory elements were sampling directly the environment. In contrast to traditional approaches that include measurement chambers, open sampling systems are sensitive to dispersion mechanisms of gaseous chemical analytes, namely diffusion, turbulence, and advection, making the identification and monitoring of chemical substances more challenging. The sensing platform included 72 metal-oxide gas sensors that were positioned at 6 different locations of the wind tunnel. At each location. , 10 distinct chemical gases were released in the wind tunnel. , the sensors were evaluated at 5 different operating temperatures. , and 3 different wind speeds were generated in the wind tunnel to induce different levels of turbulence. Moreover. , each configuration was repeated 20 times. , yielding a dataset of 18,000 measurements. The dataset was collected over a period of 16 months. The data is related to "On the performance of gas sensor arrays in open sampling systems using Inhibitory Support Vector Machines", by Vergara et al.[1].The dataset can be accessed publicly at the UCI repository upon citation of [1]: http://archive.ics.uci.edu/ml/datasets/Gas+sensor+arrays+in+open+sampling+settings.
478
CFD simulation of cross-ventilation in buildings using rooftop wind-catchers: Impact of outlet openings
Providing solutions for effective natural ventilation in buildings is a topic that receives increasing attention from building designers and the research community.On the one hand, the fresh air supply from natural ventilation forms a sustainable alternative to more energy-intensive types of mechanical ventilation.On the other hand, it can serve as a strategy for improving thermal comfort conditions by harnessing the ventilative cooling potential of air in the ambient environment.Wind catchers are vertical building-integrated structures that induce fresh air into indoor spaces by taking advantage of the pressure difference over the building and across the device openings.Depending on the number of its openings and the variable wind flow field that surrounds the building, a wind catcher can act as either an air supply or an extract system.In addition, the wind-induced pressure differences can result in significant flow velocity, which enables the potential for energy harvesting.Previous research has identified wind catchers as a high-potential technology for enabling natural ventilation in buildings.An overview of the development of wind catchers is provided by Hughes et al., Saadatian et al., Jomehzadeh et al. and Rezaeian et al.Even though the usefulness of wind catchers has been known for a long time, the concept is now the subject of renewed interest as an environmentally friendly solution for meeting the requirements of modern-day building design.A detailed review of the literature indicates that several studies have been performed to analyze different aspects of cross-ventilation using wind catchers.The focus of these studies has been mainly on: (i) wind-catcher geometry, such as the horizontal cross-section, the number and type of openings on the tower of the wind catcher, the height of the tower, and the design of components that control airflow; (ii) wind-driven and buoyancy-driven ventilation assessment of ancient and modern commercial wind-catchers; (iii) the feasibility of integrating wind-catchers with other passive ventilation strategies such as dome roofs and solar chimneys in buildings; (iv) the feasibility of integrating evaporative cooling systems and heat exchangers in wind catchers; and (v) the capability of wind catchers to enhance indoor and outdoor ventilation in urban areas.As many studies have shown, computational fluid dynamics (CFD) can be a very valuable tool for analyzing the working principles of wind catchers.Table 1 summarizes the results of a detailed review of the literature on CFD simulations of cross-ventilation using wind catchers.The table presents the type of wind catcher, the turbulence modeling approach, the turbulence model implemented, the type of inlet velocity profile, whether validation was performed, the main objective of the study and the ventilation performance indicator used.It can be seen that: a vast majority of CFD studies focus on geometrical characteristics of wind catchers, buoyancy-driven ventilation and the cooling performance of evaporative cooling systems and heat exchangers integrated into wind catchers.The impact of building geometry on the ventilation performance of wind catchers is limited to the roof geometry.Earlier research on wind-driven cross-ventilation in buildings has shown the significant importance of the size and position of the inlet and outlet openings on the characteristics of the flow inside the building, which can strongly influence the ventilation performance and the indoor air quality.However, this has not yet been investigated for rooftop wind-catchers, where
the flow pattern inside the wind catcher and the integrated building is very complicated.3D steady Reynolds-Averaged Navier-Stokes (RANS) has been the most common approach for CFD simulations of wind catchers.The standard k-ε and the RNG k-ε turbulence models have been widely implemented.Nevertheless, Montazeri et al. showed the superior performance of the realizable k-ε model for cross-ventilation with wind catchers.It should be noted that Large-Eddy Simulation (LES) can provide more accurate descriptions of the mean and instantaneous flow field around bluff bodies than steady RANS, at the expense of much larger requirements in terms of computational resources.Nevertheless, LES simulations of cross-ventilation for buildings with wind catchers are very scarce and are limited to the study by Kobayashi et al., in which the performance of RANS and LES for cross-ventilation in buildings was compared, considering the impact of building surroundings.Most CFD studies have analyzed the ventilation performance of wind catchers by focusing on the induced airflow rate as the ventilation performance indicator, and disregard the indoor air quality inside the ventilated building.Both aspects, however, need to be taken into account simultaneously to assess the ventilation performance of wind catchers.Indoor air quality assessment is scarce and limited to the studies by Liu et al. and Calautit et al.Many of these studies have been performed with a uniform approach-flow mean wind speed.However, a complete understanding of the ventilation performance of wind catchers can only be achieved with incident atmospheric boundary layer flow profiles of mean wind speed, turbulent kinetic energy, and turbulence dissipation rate.Therefore, this paper investigates the impact of outlet openings on the basic flow characteristics of cross-ventilation using wind-catchers integrated into a single-zone isolated building in a neutral atmospheric boundary layer.High-resolution coupled 3D steady RANS CFD simulations of cross-ventilation are performed for 23 cases with different sizes and locations of outlet openings.The evaluation is based on three ventilation performance indicators: induced airflow rate, age of air, and air change efficiency.The CFD simulations are validated based on wind-tunnel measurements of mean surface static pressures and mean indoor air speed for a one-sided wind-catcher by Montazeri and Azizian.The results of this study can assist building engineers with integrating wind catchers in buildings, and allow product developers to make informed decisions about how wind catcher openings, building openings and other innovative components should be applied to result in maximum performance of the wind catcher and indoor air quality.This paper contains six sections.In Section 2, the experiments by Montazeri and Azizian and the validation study are briefly outlined.Section 3 describes the computational settings and parameters for the CFD simulations.Section 4 presents the CFD results.Finally, discussion and conclusions are provided.Wind-tunnel measurements of surface static pressure and indoor air speed for a one-sided wind-catcher were conducted by Montazeri and Azizian.The experiments were performed in an open-circuit wind-tunnel with uniform approach-flow conditions.The test section of the wind tunnel was 3.6 m long with a cross-section of 0.46 × 0.46 m2.A 1:40 scale model of an ancient one-sided wind-catcher was employed.The wind-catcher model was connected to the reduced-scale model of a single-zone building, which was positioned
outside the test section to keep the blockage ratio at about 5%.The building model had a window opening with a surface area equal to 110% of that of the wind-catcher opening.Upstream static and dynamic pressures were measured with a Pitot tube mounted 0.165 m upstream of the model and at a height of 0.12 m above the test-section floor.The upstream wind speed was 20 m/s, yielding a Reynolds number of 198,000 based on the wind-catcher height, which is well above the critical value of 11,000 that is often used to indicate Reynolds-number-independent flow.At this wind speed, the pressure difference between a point inside the wind tunnel and its outer area was about 23 Pa.The measurements were performed for different approach-flow wind directions.Mean surface pressures were measured along three vertical lines on the three internal surfaces of the wind-catcher model.The measurement lines were positioned in the middle of the surfaces and 23 holes were drilled at equidistant points along them.In the remainder of this paper, we will refer to these vertical lines as the edge lines and the center line.Several Pitot and static tubes were used to measure the air speed inside the wind catcher.For θWT = 0°–75°, twenty Pitot and five static tubes were installed vertically at the bottom of the model, while the tip of the tubes was positioned about 0.020 m above the building ceiling.For the wind directions between 75° and 180°, when the wind catcher acts as a suction device, ten Pitot and five static tubes were situated at the top of the model in a horizontal plane at z = 0.065 m.The Bernoulli equation was used to determine the air speed at the position of each Pitot tube.Note that the tubes might experience incoming flows that were not parallel to their center lines, which could lead to errors in the measured data.Additional measurements were carried out by Montazeri and Azizian to determine the yaw characteristics of the Pitot and static tubes.The results showed that the square-ended probes began to show errors near 18° of flow inclination.The uncertainty of the measured velocity using this method was within 10%.A coupled indoor-outdoor computational domain is constructed at reduced scale with a high level of detail.The computational model consists of the wind-catcher model, the building model, and the wind-tunnel test section.The upstream and downstream domain lengths are 5HWT and 10HWT, respectively, based on the best practice guidelines by Franke et al. and Tominaga et al.The computational grid is generated with the aid of the pre-processor Gambit 2.4.6, resulting in a hybrid grid with 7,265,421 cells.The grid consists only of prismatic and hexahedral cells.The grid resolution resulted from a grid-sensitivity analysis.Along the width and depth of the wind-catcher opening, 25 and 26 cells are used, respectively.A maximum stretching ratio of 1.2 controls the cells located in the immediate surroundings of the wind-catcher model.The minimum distance from the center point of the wall-adjacent cell to the wind-catcher walls is about 2 × 10−3 m.This corresponds to y* values between 35 and 92.The walls of the computational domain are modelled as no-slip walls with zero roughness height.The standard wall functions are applied.Zero static gauge pressure is applied at the outlet openings, i.e.
the vertical plane downstream of the wind catcher and the window opening.The commercial CFD code Fluent 12.1 is used to perform the simulations.The 3D steady RANS equations are solved in combination with the realizable k-ε turbulence model by Shih et al.This is in line with the sensitivity analysis of the CFD results to the turbulence models performed by Montazeri et al.The sensitivity analysis was performed for six turbulence models: the standard k−ε model; the realizable k−ε model; the renormalization group (RNG) k−ε model; the standard k−ω model; the shear-stress transport (SST) k−ω model; and the Reynolds stress model.Two approach-flow wind directions are considered: θ = 0° and θ = 180°.The results show that of the six commonly used turbulence models, only the realizable k-ε model succeeds in reproducing both surface static pressure and indoor air speed at θ = 0°, while the standard k-ε and standard k-ω clearly fail in doing so and show the poorest performance.At θ = 180°, the general agreement with the measurements is good to very good for all turbulence models for the surface static pressure.None of the turbulence models, however, can accurately predict the mean indoor air speed.The SIMPLE algorithm is used for pressure-velocity coupling, pressure interpolation is second order and second-order discretization schemes are used for both the convection terms and the viscous terms of the governing equations.Convergence is assumed to be obtained when all the scaled residuals leveled off and reached a minimum of 10−6 for x, y momentum, 10−5 for z momentum and 10−4 for k, ε and continuity.The simulations are performed for the wind-catcher opening facing the approach flow, i.e. θWT = 0°.Fig. 2a and b compare the CFD results and the wind-tunnel results of the pressure coefficient, CP, along the vertical lines.The pressure coefficients are computed as CP = (P − P0)/(0.5ρUref2), where P is the mean static pressure at the internal surfaces, P0 the reference static pressure, Uref the reference (upstream) wind speed and ρ = 1.225 kg/m3 the air density.As the upstream static pressure was measured only in an empty wind-tunnel, in the present study we used the CFD result of the static pressure at the point where the Pitot tube for the reference static pressure was mounted in the experiment.This is in line with the results of the sensitivity analysis by Montazeri and Blocken for CFD validation studies in which the reference static pressure in the measurements is unknown.The results show that the vertical CP gradients are quite well reproduced.The general agreement between CFD results and wind-tunnel measurements is also good, especially for the points at the top and bottom of the wind catcher.For the points between z/HWT = 0.3 and 0.6, however, CFD tends to overestimate the Cp values.For the center line, the maximum absolute deviation from the measurements is about 0.06, which occurs near z/HWT = 0.4, while for the edge line, this increases to about 0.11 for the point near z/HWT = 0.5.
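As a small illustration of this normalization, the sketch below computes Cp for a set of surface pressure-tap readings; the tap values are placeholders rather than data from the study, while Uref = 20 m/s and ρ = 1.225 kg/m3 follow the experimental description given above.

```python
# Hedged sketch: pressure coefficient Cp = (P - P0) / (0.5 * rho * Uref**2).
# The tap pressures are placeholder values, not measurements from the study.
import numpy as np

rho = 1.225    # air density, kg/m3
U_ref = 20.0   # reference (upstream) wind speed, m/s
P0 = 0.0       # reference static pressure, Pa (gauge)

P_taps = np.array([120.0, 95.0, 60.0, 20.0, -15.0])   # placeholder tap pressures, Pa

Cp = (P_taps - P0) / (0.5 * rho * U_ref**2)
print(np.round(Cp, 3))   # -> [ 0.49   0.388  0.245  0.082 -0.061]
```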
Fig. 2c shows a comparison between the simulated and measured normalized air speed values at the position of the Pitot tubes in the measurements.The agreement between CFD and measurements is good, especially for the points with relatively high air speeds.The average deviation for all measurement points is about 7%, while the maximum deviation is about 21%.In this study, the simulations are performed for a single-zone isolated building with an integrated one-sided wind-catcher.The building has dimensions width × depth × height = 6 × 8 × 3 m3.The wind catcher is 1.5 m high with a horizontal cross-section area of 1 × 1 m2.The wind-catcher opening has width × height = 1 × 1 m2, facing the approach flow, i.e. θ = 0°.In this study, 23 cases are considered, which differ in the size and/or type of their outlet openings.These building models can be classified into three groups: (1) Reference building: the reference building has a single window opening with a surface area equal to the wind-catcher opening.The window is positioned in the middle of the leeward facade to enhance the efficiency of the system.It is worth noting that for small and medium-sized buildings, openings further apart perform more efficiently than those close to each other.The reference building is defined as a starting point for the investigation of outlet openings.The results of all cases will be compared with the results of the reference case. (2) Buildings with different window opening sizes: 11 cases are considered.The ratio of the surface area of the window opening to that of the wind-catcher opening ranges from A/Ainlet = 0 to A/Ainlet = 2.0.In each case, the window is positioned in the middle of the leeward facade. (3) Buildings with different types of outlet openings: 12 cases are considered in which 4 types of outlet openings are used: a ceiling opening near the trailing edge of the roof with a surface area equal to the wind-catcher opening, i.e. A/Ainlet = 1.0; a secondary one-sided wind-catcher near the trailing edge of the roof with an opening with A/Ainlet = 1.0; a secondary one-sided wind-catcher with an opening with A/Ainlet = 1.0, which is positioned back to back close to the main wind-catcher; and two identical one-sided wind-catchers with A/Ainlet = 1.0, one near the trailing edge of the roof and one next to the main wind-catcher.Different combinations of outlet openings are made using these four openings and two windows with A/Ainlet = 1.0 and A/Ainlet = 2.0.A computational model is made of the building and the integrated one-sided wind-catcher in a way that makes it possible to create all outlet openings presented in Section 3.1.The upstream and downstream domain lengths are 5H = 22.5 m and 15H = 67.5 m, respectively.The resulting dimensions of the domain are W × D × H = 94 × 98 × 27 m3.The computational grid consists of 1,797,312 hexahedral cells.The grid is shown in Fig. 7.Along the width and height of the wind-catcher opening, 14 and 12 cells are used, respectively.This is 14 and 14 cells for the window opening of the reference building, as shown in Fig. 7e.The grid resolution resulted from a grid-sensitivity analysis that will be presented in Section 4.1 and is shown in Fig.
8.The minimum and maximum cell volumes in the domain are approximately 4.5 × 10−5 m3 and 6.5 × 101 m3, respectively.The distance from the center point of the wall adjacent cell to the wall, for the windward, leeward and roof of the building is 0.020 m, 0.020 m and 0.019 m, respectively.For the internal surfaces of the wind catcher and the building, this distance ranges from 0.020 m to 0.025 m.This is 0.016 m for the ground plane.This corresponds to y* values between 30 and 500.As standard wall functions are used in this study, these values ensure that the center point of the wall-adjacent cell is placed in the logarithmic layer.Standard wall functions are also used at the building surfaces but with zero roughness height ks = 0.Zero gauge static pressure is applied at the outlet plane.Symmetry conditions are applied at the top and lateral sides of the domain.The ambient temperature is assumed to be 20 °C.The solver settings are identical to those used in the validation study and reported in Section 2.4.Isothermal CFD simulations are performed with the 3D steady RANS equations and the realizable k–ε turbulence model for θ = 0°, i.e. the wind-catcher opening facing the approach flow.Convergence is assumed to be obtained when all the scaled residuals levelled off and reached a minimum of 10−6 for x, y momentum, 10−5 for z momentum and 10−7 for k, ε and continuity.All simulations are performed on an 8-core workstation with 24 GB of system memory.In this study, a grid-sensitivity analysis is carried out to reduce the discretization errors and the computational time.The analysis is performed for the reference building and is based on two additional grids; a coarser grid and a finer grid.An overall linear factor √2 is used for coarsening and refining the grid.The coarse and fine grid have 682,110 and 5,456,880 cells, respectively.The three grids are shown in Fig. 8.The simulations on the coarse, basic and fine grid require about 2, 3 and 5 CPU hours, respectively.The air velocity profiles along a vertical line and a horizontal line inside the building are compared in Fig. 9 for the three grids.The results show a limited dependence of the air velocity results on the grid resolution along the lines inside the building.Negligible grid-sensitivity is found for the other parts.In this case, the average deviation between the coarse and reference grid along the three lines is 1.7% while it is about 0.9% between the fine and reference grid.Therefore, the reference grid is retained for further analysis.Fig. 
Fig. 10 presents contours of the normalized wind speed and pressure coefficient in the vertical center plane. The figures indicate the main features of the flow: the separation zone on the roof, the wake behind the building, and flow separation and reattachment inside the wind catcher. The direct impingement of the flow onto the ceiling of the wind catcher yields a large stagnation area on this surface. The flow is subsequently bent downwards into the “tower” of the wind catcher, where the maximum normalized wind speed reaches about V/Uref = 0.8. The flow also separates at the lower edge of the opening. Consequently, a recirculation zone emerges inside the tower, which may negatively influence the ventilation performance of the system. As shown by Montazeri and Azizian, reducing the size of this recirculation zone can significantly enhance the induced airflow rate. The jet is directed downwards immediately after passing through the tower and decelerates until it reaches the opposite wall, where it impinges onto the floor. This leads to a considerable reduction in the jet momentum, but the jet still has sufficient momentum to drive recirculation zones inside the building. The flow decelerates inside the building and accelerates again closer to the window. Fig. 10b shows the Cp distribution in the center plane. The wind-catcher opening experiences higher Cp values than those inside the building and at the window surface. In this case, the area-weighted average of the pressure coefficient at the wind-catcher opening surface and at the window surface is 0.60 and 0.05, respectively. The volume-weighted average of the pressure coefficient inside the building is 0.27. A relatively high-pressure zone can be clearly seen in the stagnation area underneath the ceiling of the wind catcher. The pressure reduces along the tower and reaches the Cp value inside the building, where a relatively uniform distribution of Cp can be observed. A small area with a higher level of Cp can be seen underneath the wind catcher, where the jet impinges on the floor. The distributions of the local mean age of air across two horizontal planes, located 1.30 m and 0.65 m above the floor, and two vertical planes, located 3 m and 1.5 m from the sidewall of the building, are shown in Fig. 11. The lowest level of the local mean age of air is achieved on the center plane, where the flow is less affected by the recirculation zones inside the building. It should be noted that, unlike cross-ventilation configurations in which inlet and outlet openings are located on opposite building walls, in the present study a “stream tube” connecting the inlet and outlet is not formed. Fig. 12 presents the distribution of the normalized wind speed and the pressure coefficient in the center plane for three ratios of window opening to wind-catcher opening: Aoutlet/Ainlet = 0.2, Aoutlet/Ainlet = 1.0 and Aoutlet/Ainlet = 2.0. By enlarging the window, the local air speed values increase inside the wind catcher and the building space. Nevertheless, the flow pattern, except very close to the outlet, remains quite similar for all cases. In addition, it can be clearly observed that the internal static pressure decreases as the window size increases. Profiles of the area-weighted average of the pressure coefficient at the wind-catcher opening, Cp,WC, and at the window, Cp,W, as a function of Aoutlet/Ainlet are provided in Fig. 13a. The variation in the spatially averaged internal pressure coefficient, Cp,i, as a function of Aoutlet/Ainlet is also shown in Fig. 13b.
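The averaged pressure coefficients reported above follow directly from the CFD fields. As a minimal sketch (with hypothetical face and cell data, not the study's results), the area-weighted average over an opening and the volume-weighted average over the indoor air volume can be computed as follows.

```python
# Minimal sketch of the averaging used for the reported pressure coefficients:
# an area-weighted average over the cell faces of an opening and a
# volume-weighted average over the cells of the indoor air volume.
# The face/cell data below are hypothetical placeholders.
import numpy as np

def area_weighted_average(values, areas):
    values, areas = np.asarray(values), np.asarray(areas)
    return float((values * areas).sum() / areas.sum())

def volume_weighted_average(values, volumes):
    values, volumes = np.asarray(values), np.asarray(volumes)
    return float((values * volumes).sum() / volumes.sum())

# Hypothetical Cp on four faces of the wind-catcher opening (1 x 1 m2 total).
cp_wc_faces, face_areas = [0.62, 0.59, 0.61, 0.58], [0.25, 0.25, 0.25, 0.25]
# Hypothetical Cp in a handful of indoor cells.
cp_cells, cell_volumes = [0.29, 0.26, 0.27, 0.25], [1.0, 1.2, 0.9, 1.1]

print("Cp,WC =", round(area_weighted_average(cp_wc_faces, face_areas), 2))
print("Cp,i  =", round(volume_weighted_average(cp_cells, cell_volumes), 2))
```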
It can be observed that Cp,WC, Cp,W and Cp,i decrease monotonically as the window opening is enlarged. The difference between Cp,WC and Cp,W increases as the outlet-to-inlet ratio increases from 0.2 to 1.6 and remains approximately constant for higher values of Aoutlet/Ainlet. For Aoutlet/Ainlet = 0.2, 1.0, 1.6 and 2.0, for example, the pressure difference is about 0.49, 0.55, 0.58 and 0.58, respectively. Fig. 14 shows the profile of the normalized induced airflow rate as a function of the outlet-to-inlet ratio. By enlarging the window, while the area of the wind-catcher opening is kept constant, the induced airflow rate into the building increases. Note that the increase in the induced flow rate per unit of window enlargement diminishes significantly as the window becomes larger. For example, as the outlet-to-inlet ratio increases from 0.4 to 0.6, the induced flow rate rises by about 10.5%. This increment is, however, only 3.5% as the size of the outlet opening increases from 1.6 to 2.0. Table 3 presents the normalized airflow rate through the openings and the area-weighted average Cp at the openings. The following observations can be made. For cases with only one outlet opening, i.e. Case_O1, Case_O4 and Case_O7, the highest area-weighted average pressure difference between the inlet and outlet is achieved when a secondary wind-catcher is used back to back close to the main wind-catcher. Note that the flow in this region is dominated by the separation at the top and the circulation in the wake of the wind-catcher. Consequently, the opening of the secondary wind-catcher experiences the lowest Cp values compared with the other outlet openings. For Case_O7, for example, the Cp difference between the inlet opening and the outlet opening is about 21% higher than that for Case_O4, and about 13% and 34% higher than that for Case_O1 and the reference case, respectively. For Aoutlet = Ainlet, the highest induced airflow rate is achieved for Case_O7; in this case, the induced airflow rate is the same as for the reference case. The induced airflow rate increases as the total surface area of the outlet openings is enlarged, regardless of their position. For example, when a secondary wind-catcher is used close to the trailing edge of the roof, Q/Qref is 92%, 118% and 124% for Aoutlet/Ainlet = 1.0, 2.0 and 3.0, respectively. For cases with outlet surface areas larger than the surface area of the inlet opening, Q/Qref is relatively insensitive to the type of outlet openings. For example, for Aoutlet = 2Ainlet, Q/Qref = 115%, 118%, 122%, 119% and 118% for Case_O2, Case_O5, Case_O8, Case_O10 and Case_W2, respectively. The results show that using two openings very close to each other will not increase the induced airflow; in addition, it leads to a considerable reduction in the indoor air quality inside the building. The negative impact of short-circuiting is expected to play an important role for multi-opening wind-catchers in which the openings are located very close to each other. Such an analysis is therefore crucial for this type of wind-catcher, which has received much attention from building designers and the research community. In this study, a low-rise single-zone building is considered, in which the jet can reach the floor. Further investigations need to be performed for taller buildings, where the jet is expected to dissipate before it reaches the floor. In a one-sided wind-catcher, the induced air stream normally enters the indoor space through an opening in the ceiling as a relatively high-momentum air jet. This is comparable to the
impinging jet ventilation (IJV) concept, in which an air jet is discharged downwards at a relatively low level. It has been shown that this approach is more efficient than a displacement ventilation system. Further research is needed to investigate the possibility of using wind catchers as an IJV system. In this case, “draught” should be taken into account, as high air speeds might occur in the occupied zone, especially underneath the wind-catcher opening. In this study, the simulations are performed for only one approaching wind direction, θ = 0°. Further research needs to be performed to assess the ventilation performance of one-sided wind-catchers for different approaching wind directions. Earlier research has shown that the pressure difference causes a one-sided wind-catcher to perform as a suction system and to retain some of its ventilation performance at other wind directions. The orifice equation is commonly used for cross-ventilation analysis. The accuracy of this approach needs to be evaluated for rooftop wind-catchers, where the flow inside the building is very complex. Note that it has been shown that for “long” openings such as wind catchers, the still-air discharge coefficient depends on the Reynolds number. This paper presents a detailed evaluation of the impact of outlet openings on the basic flow characteristics of cross-ventilation using wind-catchers integrated into a single-zone isolated building. The concept of age of air introduced by Sandberg is used to assess the indoor air quality. High-resolution coupled 3D steady RANS CFD simulations of cross-ventilation are performed for 23 building models with different sizes and positions of the outlet openings. The simulations are performed for a wind direction perpendicular to the wind-catcher opening. The evaluation is based on validation with wind-tunnel measurements of mean surface static pressures and mean indoor air speed for a one-sided wind-catcher. The following conclusions can be drawn. Impact of window size: for a given size of the wind-catcher opening, the size of the window can significantly influence the induced airflow rate into the building. By enlarging the window, while the area of the wind-catcher opening is kept constant, the induced airflow rate into the building increases monotonically. This increase is more pronounced for cases with Aoutlet < Ainlet. The increase in the induced flow rate per unit of window enlargement diminishes significantly as the window becomes larger. Therefore, when a window is used in combination with a one-sided wind-catcher, enlarging the window beyond Aoutlet/Ainlet = 1.0 cannot be considered a beneficial way to increase the airflow rate induced by the wind catcher. The size of the window, placed on the leeward wall of the building, has a negligible impact on the air change efficiency, i.e. indoor air quality.
Impact of different types of outlet openings: for Aoutlet/Ainlet = 1.0, the highest induced airflow rate is achieved for the reference case, in which a window is placed in the middle of the leeward façade, and for the case in which a secondary wind-catcher is placed back to back next to the main wind-catcher. For cases with outlet surface areas larger than the surface area of the inlet opening, the induced airflow is relatively insensitive to the type of outlet openings. By increasing the surface area of the outlet openings, regardless of their position, the induced airflow rate and the air change efficiency increase. For a given value of Aoutlet/Ainlet, using a secondary wind-catcher back to back next to the main wind-catcher leads to the highest local mean age of air values inside the building, resulting in the lowest air change efficiency. It can be concluded that the combination of a one-sided wind-catcher and a window positioned on the leeward wall of the building is superior to the other outlet openings tested in this study in terms of induced airflow rate and indoor air quality.
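As a purely illustrative aside (not part of the study's CFD methodology), a textbook orifice model with the inlet and outlet treated as two openings in series reproduces the qualitative trend behind the conclusions above: the induced flow rate keeps rising as the outlet is enlarged, but with sharply diminishing returns once Aoutlet exceeds Ainlet. The discharge coefficient, reference wind speed and Cp difference used below are assumed values, not quantities taken from the study.

```python
# Illustrative series-orifice sketch: Q = Cd * A_eff * U_ref * sqrt(dCp),
# with 1/A_eff^2 = 1/A_inlet^2 + 1/A_outlet^2 for two openings in series.
# All parameter values are assumptions for illustration only.
import math

CD = 0.61          # assumed discharge coefficient for sharp-edged openings
U_REF = 6.6        # m/s, hypothetical reference wind speed
DELTA_CP = 0.6     # assumed Cp difference between inlet and outlet
A_INLET = 1.0      # m^2, wind-catcher opening

def induced_flow(a_outlet, a_inlet=A_INLET):
    """Volume flow rate (m^3/s) for two openings in series."""
    a_eff = 1.0 / math.sqrt(1.0 / a_inlet**2 + 1.0 / a_outlet**2)
    return CD * a_eff * U_REF * math.sqrt(DELTA_CP)

q_ref = induced_flow(1.0)  # reference case: A_outlet = A_inlet
for ratio in (0.2, 0.4, 0.6, 1.0, 1.6, 2.0):
    q = induced_flow(ratio * A_INLET)
    print(f"A_outlet/A_inlet = {ratio:.1f}: Q/Q_ref = {q / q_ref:.2f}")
```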
Cross-ventilation using rooftop wind-catchers is very complex as it is influenced by a wide range of interrelated factors including aerodynamic characteristics of the wind catcher, approach-flow conditions and building geometry. Earlier studies on wind-driven cross-ventilation in buildings have shown the significant impact of the geometry and position of openings on the flow and ventilation performance. However, this has not yet been investigated for cross-ventilation using wind catchers. This paper, therefore, presents a detailed evaluation of the impact of the outlet openings on the ventilation performance of a single-zone isolated building with a wind catcher. The evaluation is based on three ventilation performance indicators: (i) induced airflow rate, (ii) age of air, and (iii) air change efficiency. High-resolution coupled 3D steady RANS CFD simulations of cross-ventilation are performed for different sizes and types of outlet openings. The CFD simulations are validated based on wind-tunnel measurements. The results show that using outlet openings very close to the wind catcher will not increase the induced airflow, while it leads to a considerable reduction in the indoor air quality. A combination of one-sided wind-catcher and window is superior, while the use of two-sided wind-catchers leads to the lowest indoor air quality and air change efficiency.
479
Performance bonuses in the public sector: Winner-take-all prizes versus proportional payments to reduce child malnutrition in India
Prize contests and other performance incentives offer a promising approach to improving public service provision.Teachers, nurses and other service providers often have limited information about the success of their efforts, and existing arrangements may not reward them for significant achievements.Performance pay and bonuses can help align worker interests with beneficiary needs, and reveal information about how to improve in fields such as education and health."This study reports on a randomized trial of incentives offered to Anganwadi workers serving preschool children in daycare centers across the urban slums of Chandigarh, India, as part of the government's Integrated Child Development Services program.Each Anganwadi worker manages her own ICDS center, typically a single room, in which she is expected to provide a mid-day meal, daycare and some educational services for about 25 children from 3 to 6 years of age."A principal objective of the ICDS program is to help children avoid and recover from malnutrition, defined by the government of India as low weight for age, by complementing whatever food and care is provided by each child's own family.Workers are salaried civil servants, for whom disciplinary actions in the case of poor performance are generally ineffective, leading some states to introduce positive incentives for better outcomes.The incentives we offer aim to identify how ICDS managers can best recognize and reward Anganwadi workers for their otherwise neglected efforts, building on a series of previous experiments in Chandigarh and elsewhere."The specific study presented here compares two canonical types of incentives offered in addition to the workers' base salaries: winner-take-all contests in which workers compete for a fixed prize, and proportional rewards in which that amount is divided among workers in proportion to their share of total achievement.With winner-take-all prizes, reward is based on rank order and most workers receive nothing, whereas with proportional payments every increment of success is always rewarded and all competitors may receive something for even small improvements in performance.Our design aims to identify differences in how workers respond to the two types of incentives during the contest period when these rewards are offered, and also after they are withdrawn.A key feature of our trial is the use of identical information and payment budgets in the two treatment arms, so any differences in response are attributable to the way funds are distributed.This design complements a previous trial in this setting which compared piece-rate payments to a fixed bonus and a control arm in which workers received only their base salary.Those treatments all used the same information, but differed in amounts of money paid.Other trials differ in many dimensions, as in the introduction of new reward schemes relative to the status quo without performance incentives, or the comparison of financial versus symbolic prizes, fixed budgets versus payment for services delivered, or outputs rather than inputs."Our trial is designed to inform how service-delivery and development organizations use contest incentives, especially in field settings like the ICDS in India where links between workers' efforts and outcomes are not clear, and where social norms or intrinsic motivations play an important role.Philanthropists and public agencies who introduce new incentives typically choose winner-take-all prizes, in part because both laboratory experiments and field data suggest 
that these contests often elicit the most effort by top-ranked contestants.Rank-order competition offers “high-powered” incentives, concentrating all of the available reward at the margin between the best and next-best achievement levels.Winner-take-all prizes also activate behavioral motivations associated with competition itself, as many participants respond to rank-order contests with more effort than the discounted expected value of whatever material rewards are actually offered."Using such prizes may not be appropriate, however, in situations where links from effort to outcome is unclear, or there is great variation in the difficulty of each task relative to workers' ability.Cason, Masters and Sheremeta used a laboratory experiment to show that workers whose initial experience is less successful tend to withdraw from competition, thus reducing total effort expended.Brown found a similar result in real-world athletic competitions, where the presence of a superstar who reduces payoffs to others leads them to reduce their efforts.The discouragement effect that may be associated with winner-take-all contests could potentially be overcome using proportional incentives, paying bonuses to all workers in proportion to their success.Organizations like the ICDS might want to pay incentives in proportion to success partly to elicit more effort from even the lower-performing workers, but also to avoid the displacement of intrinsic motivations and social norms that is associated with winner-take-all contests.As shown by Cason et al., the higher efforts elicited by winner-take-all prizes among top competitors occur at the expense of total welfare for the sum of all workers, while the more moderate efforts exerted under proportional rewards are closer to Nash equilibrium levels and hence less likely to be regretted after the contest ends."Lazear finds that a shift from fixed wages to piece-rate pay raised effort and productivity in a manufacturing setting, demonstrating the value of introducing some kind of output-based reward, but Bandiera et al. 
find that a further switch from piece rates to a relative pay scheme, in which individual effort imposes a negative externality on peers by lowering the other's relative performance, led to a subsequent productivity decrease.Foster and Rosenzweig find evidence of moral hazard in effort by farm laborers, Amodio and Martinez-Carrasco find evidence of shirking from group-based incentives, and in a more extreme case, Chen finds evidence that winner-take-all rewards encourage destructive sabotage among competitors."Proportional payments at the individual level could help align incentives by rewarding each person's efforts more equally, revealing what works and reinforcing norms that could improve outcomes even after rewards are withdrawn.Our intervention was implemented in collaboration with the Social Welfare Department of Chandigarh, aiming for three specific contributions to the literature on incentives for public sector service delivery in the context of a developing country:First, we compare winner-take-all prizes with proportional reward payments in a fully controlled randomized trial, where each treatment uses the same information and involves the same fixed level of budgeted expenditure.The only difference is in the distribution of funds.Putting the same information and financial resources into each arm helps overcome the problem that previous trials often combine multiple features in ways that preclude isolating the effect of any one aspect of program design.Total funds available are identical which limits differences in overall income effects, and both schemes impose the same potential total cost to the sponsor which is known in advance and facilitates budgeting.Second, we compare differences over two rounds of outcome measurement three months apart, to test for longer-term persistence of effects after incentives are withdrawn."This is particularly important given concerns that introducing competition could change habits or norms, displacing workers' intrinsic motivations and altering social relationships as in the framework described by Franco et al.Third, our incentives are paid for health outcomes, like Gertler and Vermeersch and Miller et al. 
rather than service provided like Bhushan et al. We also conduct mechanism checks on what workers actually did to alter those outcomes and their self-reported level of satisfaction with their work. If one treatment arm was more successful than the other, knowing how those workers achieved that change could help scale up success elsewhere, and knowing if worker satisfaction improved or worsened is important for personnel management. Incentive schemes that target outcomes and also sustain job satisfaction are difficult to design, as shown in previous studies of village health workers, government agencies and non-governmental organizations. The incentives we offer target the primary nutritional outcome of interest to ICDS management, which is the number of children classified as malnourished by their weight for age. This type of malnutrition is defined by the World Health Organization as a weight-for-age z score that is more than 2 standard deviations below the median of a sex-specific healthy population. To inform Anganwadi workers and ICDS managers about their progress towards that objective, we created goal cards for each child, showing their current weight and, if malnourished, the target weight they would need to achieve to be classified as no longer malnourished when re-weighed three months later. For normal-weight children, a threshold weight was specified below which the child would be classified as malnourished when weighed three months hence, with any such declines in nutritional status counted against any gains. This symmetry limits the degree to which workers are incentivized to help only malnourished children at the expense of the healthier children, while maintaining the ICDS management's focus on the fraction of children above the WAZ = −2 threshold. The trial design features randomization at the level of individual workers within neighborhoods. This ensures that workers have common information, similar ICDS management and other conditions. We enrolled a total of 85 workers serving a total of about 2200 children and their mothers, located in 6 slum areas outside central Chandigarh. Each worker reports to a supervisor who is in charge of one neighborhood. We randomly assigned individual workers in each neighborhood to one of the two treatment arms, and offered payments based on the total number of children in their center whose nutritional status improved, relative to the improvements achieved by other workers in their neighborhood who drew the same treatment. Our focus on the number of children who cross this malnutrition threshold is dictated by ICDS policy, and facilitated communication about the contest. Counting only the prevalence of weights above the threshold could lead workers to neglect children who are far from the thresholds, however, so future contests could use net gains in a continuous measure of nutritional status, such as average distance from a target weight. Future incentives could also take account of other child development objectives, such as attendance and learning.
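As a simplified illustration of the goal-card logic described above (not the study's actual calculation, which relies on the WHO sex- and age-specific reference standards), the sketch below classifies a child against the WAZ = −2 cutoff and derives the threshold weight that a goal card would show. The reference median and standard deviation are hypothetical stand-ins for the WHO tables.

```python
# Simplified sketch of the goal-card logic: classify a child as malnourished
# if weight-for-age z (WAZ) < -2 and compute the weight at which the child
# crosses that cutoff. Real WHO standards use sex- and age-specific LMS
# reference tables; the median and SD below are hypothetical stand-ins.

def waz(weight_kg, ref_median_kg, ref_sd_kg):
    """Approximate weight-for-age z score against a reference median/SD."""
    return (weight_kg - ref_median_kg) / ref_sd_kg

def threshold_weight(ref_median_kg, ref_sd_kg, cutoff=-2.0):
    """Weight at which WAZ equals the malnutrition cutoff."""
    return ref_median_kg + cutoff * ref_sd_kg

# Hypothetical reference values for a child of a given age and sex.
median, sd = 16.3, 1.9
for w in (12.0, 13.5, 15.0):
    z = waz(w, median, sd)
    status = "malnourished" if z < -2.0 else "normal weight"
    print(f"weight {w:.1f} kg -> WAZ {z:+.2f} ({status}); "
          f"goal-card threshold {threshold_weight(median, sd):.1f} kg")
```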
Fig. 1 shows the geographical location of all centers and the treatment to which they were assigned. Neighborhood boundaries are not shown, but the map clearly reveals the close proximity among centers in the most densely populated slums. Randomized assignments were carried out through a lottery in the presence of other workers from each neighborhood, to ensure transparency among workers and their supervisors. Rewards were paid out in similar gatherings after three months. This approach implies that any effects we observe for PRP relative to WTA arise among workers who know that the other type of incentive exists. To compare effects of contest design among naïve workers would require cluster randomization across isolated groups, and even then workers in groups who all receive either PRP or WTA incentives could readily infer that the same funds might be paid out in different ways. Timeline and implementation involved four rounds of data collection at three-month intervals. Child weights are the basis of the intervention and also the primary outcome of interest. Each round included weighing all children at every center, and also interviewing their mothers, using trained survey staff from the private sector. We also obtained the ICDS system's own administrative data on all children enrolled in each center, their mothers, and the workers, and conducted a separate survey of workers at the end of the experiment. A first round of data collection in October 2014 was used to familiarize respondents with the data-collection process, and then a second round in January 2015 was used to construct the goal cards for each worker, which we distributed to caregivers in early February along with their random assignment to either WTA or PRP incentives. Children were weighed again in April 2015 to compute payouts for each worker, which was followed immediately in early May by actual payment of prizes to winners in the WTA arm, and proportional rewards to workers with net gains in the PRP arm. A final round in July 2015 allowed us to test for persistence of any treatment effects, and at that same time we also collected data on worker satisfaction given the outcome of the trial. A visual summary of this sequence is provided in the annex of supplemental information. The workers were told that if a normal-weight child fell below their threshold weight, that decline would be subtracted from the number of children whose status improved. Payments would be based on the net number of improvements, n, recorded among the children attending the worker's center. Payments in both arms are truncated at zero, so workers cannot lose money by participating in the trial. Groups in which no workers achieved net improvements would have no payouts, and in the event of a tie among workers in the WTA treatment, two or more workers could share the prize equally. In the PRP arm, all workers who achieve some improvement receive their proportional share of the group's entire bonus. The available bonus pool for each treatment group was set at 600 Rs per worker. The level of payment is designed to be just sufficient to elicit workers' effort based on Singh and Mitra, so as to achieve the highest possible level of cost-effectiveness. Using bonus pools with a fixed budget per worker also facilitates replication across places with varying numbers of workers. Group size alters the payoff formula in WTA, as each worker in larger groups has a lower probability of winning a larger prize, but in PRP the payoff is less susceptible to group size as workers in larger groups can
expect to receive a smaller share of a larger reward.In the neighborhoods selected for our trial, the smallest treatment area had only three workers.The minimum size for a contest is two workers, so this area could not be split between two arms, and by random draw all three workers were assigned to the WTA contest.Two neighborhoods had four centers and were split evenly with two workers in each arm.One neighborhood had seven centers and drew three in the WTA arm and four in the PRP arm.Another had 29 centers, drawing 14 for WTA and 15 for the PRP treatment, and one had 38 centers drawing 19 in each arm.The names of each neighborhood are detailed in the annex of supplemental information, showing the number of workers in each treatment group.Actual payments were based on changes in measured weights after 3 months.The number of children measured in each group at that time is listed in the annex of supplemental information.A total of 1225 children at 43 centers in six neighborhoods were measured in the WTA arm, and 1115 children at 42 centers in five neighborhoods were measured in the PRP arm.As it happened, net gains in malnutrition status occurred in only three of the six WTA groups, so only three of the 43 workers in that arm received a payout."Those prizes averaged 4000 Rs, about one month's salary for these workers.In the PRP arm, net gains occurred in all five neighborhood groups, at centers managed by 16 of the 42 workers.All 16 of them received a payout, averaging 1575 Rs each.Payouts are listed and shown graphically in the annex of supplemental information.Table 1 shows summary statistics from the Baseline-2 survey when randomization occurred.Column 3 shows the differences between the two arms along with their adjusted standard errors.None of the characteristics have statistically significant differences, so randomization between the arms was successful in these terms.Testing for treatment effects can proceed directly, but for completeness our tests include results with statistical controls for child, mother and worker characteristics, and a placebo test for artefactual effects prior to randomization and treatment.All of the child regressions have standard errors clustered at the level of the ICDS center."The summary statistics in Table 1 reveal key features of the ICDS system, including a large gap in age and education between children's mothers and the Anganwadi caregivers.Child malnutrition is widespread, with the average weight-for-age z score around −1.5 standard deviations below the median for a healthy child of their age and sex, and 28% of the children officially classified as malnourished by that standard as defined by the World Health Organization.Child diets most often include milk, ‘dal’, ‘dalia’ and roti, with less frequent consumption of green vegetables, chips and fruit, and less common intake of fruit, eggs or chicken.Worker effort is highly variable, with standard deviations almost as large as the means for how often mothers say the worker visits their home, talks about the child, hosts a meeting of mothers at the Anganwadi center, or has other meetings with the mother.Repeated measurements of child weight are vulnerable to heaping and serially correlated errors, as illustrated by diagnostic figures in the annex of supplemental information.First, histograms of weights and ages at Baseline-2 reveal considerable heaping in child weight at integer values from 10 to 15 kg.Second, a scatter plot of changes in weight-for-age z scores from Baseline-2 to Endline-1 reveals 
mean reversion, as children who are initially more underweight experience larger gains, and those with higher initial weight-for-age experience more decline.A pattern of this type could be due to biological or behavioral responses to weight change, but could also be driven by measurement error between the two surveys.Heaping and random errors would attenuate estimated treatment effects, but not threaten the validity of our experimental design.Our treatment variable of interest, proportional, takes the value 1 if the worker is assigned to the payment bonuses for their share of all children whose status improves and zero otherwise, so β is the average treatment effect of this relative to a winner-take-all prize.The outcome variable W is a continuous measure of child weight, in kilograms or as a z score of weight-for-age, and the subscript i represents the individual child, the subscript j represents the Anganwadi center, and the subscript t is the survey round.We use the same structure with W as the ICDS management objective itself, which is an indicator equal to 1 if the child is classified as malnourished.Other tests include mechanism checks for heterogeneity by baseline anthropometric status, mechanism checks on worker effort as reported by the mother, and tests for differences in self-reported worker satisfaction after completion of the trial.Mechanism checks for the longer-term effects include interacting the treatment with the level of payout in the short-term, to test the impact of realized outcomes on persistent treatment effects.Each test is conducted both with and without the Χijt matrix of mother and child controls, and the Cjt matrix of center-level variables.Child and mother-level controls include the sex and age in months of the child, the age of the mother, the number of children in the home, the total household income, and if the mother can read and write.The worker-level controls include the Anganwadi worker age and if the Anganwadi worker is college educated.Errors are represented by εijt and are clustered at the Anganwadi level.Attrition from one round to the next is a significant concern, as Anganwadi centers routinely have a high level of churn as children are often absent and may return or transfer to another school.Annex Tables A2.2 and A2.3 show that about 75–80 percent of the children weighed at baseline are re-weighed at the endlines, but there are no significant differences in attriters between treatment and control arms.Table 2 shows the principal result of our trial, which is an increase in weight-for-age z scores and a reduced prevalence of weight-for-age malnutrition when workers are offered bonuses in proportion to success rather than through a winner-take-all prize.Panel A shows the short-term results for which incentives are paid after 3 months, while Panel B shows long-term results 3 months later after the incentives have ended.Magnitudes are greater in the longer term, with an estimated average treatment effect on malnutrition prevalence after 3 months of 4.3 percentage points and then 5.9 percentage points after 6 months.Precision of the estimate also increases over time, with treatment effect significant after 3 months at the 0.10 level that holds only in the unconditional test, but after 6 months that significance holds even when staggering in controls for child, mother and worker characteristics, including when outcomes are measured as raw weights, and holds at the 0.05 level for the prevalence of malnutrition.We also tested for treatment effects as in 
columns 1–3 but controlling only for child age, and results were unchanged.Results in Appendix Table A5 show that the long-term impact was greater and more significant across all specifications."The proportional treatment's impact was twice as high after six months than after three months, for both weight and weight-for-age z scores.Thus, even after controlling for selection effects by restricting regressions to the same number of children across all rounds and specifications, we see the higher coefficients in Panel B providing supporting evidence for the explanation in Table 2.Randomization occurred within neighborhoods, so our regressions are run at the child level.Adding neighborhood fixed effects would leave the estimate using weight-for-age z scores insignificant at the 10% level, with a p-value of 0.138, but not alter the basic results for long term impacts on weight and weight-for-age malnutrition.To address sample size concerns we also conduct randomization inference using the -ritest- command of Hess, and find no change in results.The rise over time in magnitude and significance of effect sizes is a striking feature of our results."Contest design has a greater effect on outcomes after the competition has ended, when child weights no longer affect workers' financial compensation. "This could arise simply because actions taken by workers during the contest to raise child weights have increasing impacts over time, but could also arise because contest design has lasting effects on workers' intrinsic motivations and social norms.Of course it is also possible that significant treatment effects at Endline-2 are actually an artifact of the experiment.Randomization after Baseline-2 may have failed in a subtle way, despite being balanced in terms of observables as shown in Table 1."To check robustness we conduct additional falsification tests, using children's weights prior to treatment as a placebo outcome.This reveals no evidence of artefactual treatment effects, as shown in the annex of supplemental information.Table 3 addresses the mechanism of impact by testing for heterogeneity in effects among Anganwadi workers by their level of performance.Winner-take-all awards are highly powered in the sense that they concentrate all their resources on the top performers, while proportional incentives reward every increment of improvement including gains among poor performers.If WTA works best among top performers, then the average treatment effect of PRP that is observed in Table 2 must have worked primarily by raising outcomes among the below-average performers."Table 3 tests this prediction by interacting our treatment variable with the worker's relative performance as defined by equation, which is the formula used to compute payouts in the proportional arm. 
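As a minimal sketch of the two payout rules described earlier (a pool of 600 Rs per worker in the group, net improvements floored at zero, ties sharing the WTA prize, and PRP shares proportional to each worker's net gains), the code below implements both schemes. How workers with negative net gains enter the proportional denominator is an assumption here (they are floored at zero), and the example net-gain vector is hypothetical.

```python
# Minimal sketch of the two payout rules as described in the text.
# n_i is each worker's net number of children whose malnutrition status
# improved (declines subtracted). Payments are truncated at zero.

def winner_take_all(net_gains, pool_per_worker=600):
    """Top-ranked worker(s) split the group's pool; ties share equally."""
    pool = pool_per_worker * len(net_gains)
    best = max(net_gains)
    if best <= 0:                       # no net improvement: no payout
        return [0.0] * len(net_gains)
    winners = [i for i, n in enumerate(net_gains) if n == best]
    return [pool / len(winners) if i in winners else 0.0
            for i in range(len(net_gains))]

def proportional(net_gains, pool_per_worker=600):
    """Each worker with net gains receives her share of the group's pool."""
    pool = pool_per_worker * len(net_gains)
    gains = [max(n, 0) for n in net_gains]  # assumption: floor declines at zero
    total = sum(gains)
    if total == 0:
        return [0.0] * len(net_gains)
    return [pool * g / total for g in gains]

net = [4, 0, -1, 2, 1]                  # hypothetical net improvements
print("WTA payouts (Rs):", winner_take_all(net))
print("PRP payouts (Rs):", proportional(net))
```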
"For the tests reported in Table 3 we use each worker's difference from the mean level of improvement, and expect a negative coefficient implying that lower performers are more incentivized by proportional payments.Workers who are observed to be top performers, and are therefore more likely to be rewarded under winner-take-all, might have high ability.But they might also have benefited from differences among centers such as local disease outbreaks that affected the children in their care more than the children in other centers.If workers know something ahead of time about their ability and relative circumstances, then we would expect significant heterogeneity in the short run, regarding the outcomes at 3 months for which the worker will be rewarded in the contest.If workers learn about differences during the contest, or their short-run effort is measurable only through longer run changes, then we would expect significant heterogeneity in outcomes at 6 months.Table 3 presents our heterogeneity tests after 3 months in the odd-numbered columns, and after 6 months in the even-numbered columns.This reveals no significant heterogeneity in the power of incentives within the contest period, but consistent and highly significant effects in the longer run after rewards were paid.The gains from using proportional rewards instead of a winner-take-all prize occur primarily among workers who have lower outcomes relative to others."Payments made proportionally to all workers' improvements, instead of just the top performers, are more effective because they reach those who would otherwise be ignored or even discouraged by their lower rank in a winner-take-all contest.Results reported in Table 3 are potentially endogenous due to use of the endline results, so we conduct two other checks on heterogeneity in treatment response."First, we consider heterogeneity in terms of the worker's ability and circumstances, using their nutrition quiz scores, education levels and class size, as well as the mother's characteristics in terms of her nutrition quiz scores, education level, and the number of children at home. "Results reported in the annex Table A6 confirm that the overall significance of proportional rewards arises through the response of the less advantaged workers and among the mothers with more ability to respond.Then, we follow Chetty et al. to calculate the value-added scores for each worker based on the two rounds of baseline data, and use that to test for heterogeneity in response to the proportional treatment relative to the winner-take-all contest.Results reported in annex Table A7 also confirm our main finding from Table 3, showing that treatment effects are significant because of high response among those who experienced less success before the contest, and are therefore less likely to respond when the contest offers only one winner-take-all prize.Fig. 2 provides a visualization of links between the level of payouts and child weight-for-age z score in the long run, as measured at Endline-2, by treatment arm.As would be expected from the interaction term in Table 3, the slopes differ: for children in centers where workers were offered winner-take-all rewards, weights rose more in the high-payout centers, where there had been more weight gain three months earlier.In contrast, at centers where workers had been offered the proportional payments, there was a more equitable distribution of weight gain, with slightly higher weights in centers that had seen less rise from the baseline to Endline-I.Fig. 
3 provides an alternative kind of visualization, drawing the entire distribution of weight-for-age z scores across all three survey rounds.At baseline, the two treatment arms have almost identical fractions of children to the left of the malnutrition cutoff of z = −2.To the right of that cutoff, the winner-take-all arm happens to have somewhat more children just above the threshold up to about −1.5, while the proportional arm happens to have somewhat more children at higher weight levels.By Endline-I and especially Endline-II, the distribution in the proportional treatment arm has shifted to the right of the winner-take-all arm, especially below the cutoff of −2."Further parametric tests regarding the mechanism by which PRP treatment results in greater improvement than WTA prizes are provided in Tables 4 and 5, using heterogeneity in terms of each child's baseline anthropometric characteristics.If PRP works primarily by spreading incentives even to workers who are less likely to be top performers, its success is likely to be concentrated among more children and centers with more malnutrition.Table 4 provides these tests using weight-for-age z scores, and Table 5 does so with weight-for-age malnutrition status.Both tables reveal that the gains from PRP occur primarily among children who are more underweight, in the sense of being further below the threshold of malnutrition status.In Tables 4 and 5, columns 1 and 5 reveal that the main effect of PRP treatment is actually negative at the mean distance of center from threshold, and the positive average treatment effect found in Tables 2 and 3 arises entirely because of greater improvement among centers who have more children who are further below the threshold when the contest starts.Caregivers in the winner-take-all control arm achieve gains primarily among children who are closer to thresholds, and these gains on average are smaller than the gains achieved by caregivers in the proportional-rewards treatment who serve a broader range of children.Columns 2–4 and 6–8 find no other kinds of heterogeneity.As might be expected, children in centers with a higher prevalence of malnutrition have lower weights, but the interaction term with PRP treatment is not significantly positive.Similarly, children that had more positive weight gain from Baseline I to Baseline II had higher weights in Endline II, but again the interaction term with PRP treatment is not significantly positive, even when that relationship is tested using a dichotomous indicator variable.In summary, our results show that financial incentives for improved outcomes have larger and more significant effects when paid in proportion to improvement, rather than as a winner-take-all prize paid only to top performers.This average treatment effect arises because of improvements among lower-ranked performers and more malnourished children, particularly after the contest ends."These effects of contest design become larger over time, perhaps because their actions during the contest cause later changes in children's weight, or because payouts in the contest influence the workers' later actions through intrinsic motivations and social norms.In previous trials, even unconditional bonuses have elicited some increased effort through gift-exchange mechanisms."Intrinsic motivations are not directly observable, but some clues may be provided by workers' responses to questions about their level of satisfaction with their own abilities, their work, and their life in general. 
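The specifications referred to in Tables 2, 3 and 6 are child- or worker-level regressions with a proportional-treatment dummy, optional interaction terms, controls, and standard errors clustered at the Anganwadi-center level. A sketch of this kind of estimation is shown below; the data frame is synthetic and the variable names are placeholders, not the study's dataset.

```python
# Sketch of a child-level specification of the type described above: OLS of
# the outcome on the proportional-treatment dummy, its interaction with the
# worker's performance relative to the group mean, plus controls, with
# standard errors clustered at the Anganwadi-center level. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_centers, n_per_center = 85, 25
center_id = np.repeat(np.arange(n_centers), n_per_center)
proportional = np.repeat(rng.integers(0, 2, n_centers), n_per_center)
rel_perf = np.repeat(rng.normal(0, 1, n_centers), n_per_center)
child_age = rng.uniform(36, 72, n_centers * n_per_center)
waz = (-1.5 + 0.07 * proportional - 0.03 * proportional * rel_perf
       + 0.002 * child_age + rng.normal(0, 1, n_centers * n_per_center))
df = pd.DataFrame(dict(center_id=center_id, proportional=proportional,
                       rel_perf=rel_perf, child_age=child_age, waz=waz))

model = smf.ols("waz ~ proportional * rel_perf + child_age", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["center_id"]})
print(result.summary().tables[1])
```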
"Table 6 tests for the effect of contest design on workers' satisfaction with their own ability, their work, and their life in general, asked after workers have seen payouts made to them and others.Our hypothesis is that PRP incentives lead to higher satisfaction levels, and does so primarily among the workers with lower payouts whose rank order makes them unlikely to be rewarded in WTA contests."These mechanism tests have less statistical power than our main results, because sample size is limited to the number of workers rather than children, and because worker satisfaction scores have less variance in response to treatment than children's weights. "Responses about satisfaction in the worker's own ability and in her work are on a Likert scale where 1 means very satisfied and 5 means very dissatisfied, with coefficients reversed so that a positive sign indicates higher satisfaction.For reporting purposes we have reversed the sign, so that a larger score indicates more satisfied.Responses about her satisfaction with life in general are on ladder scale where 1 means very unhappy and 10 means very happy.The first row of Table 6 reveals that the main effect of PRP incentives on an ‘average payout worker’ is positive but always insignificant."Columns 1–3 control for the worker's absolute level of payout, and columns 4–6 control for their payout relative to the mean of their treatment group.In both cases a lower payout implies a lower rank order, and hence smaller likelihood of success in a WTA contest."We find that the main effect of lower payout is negative but insignificant in all but column 6, and the interaction term are significant at p = 0.05 only for workers' own ability and at p = 0.10 for worker's life satisfaction.This provides only suggestive evidence that PRP incentives leave lower-ranked workers more satisfied with their abilities and with life, implying less discouragement as compared to a WTA contest.This is consistent with previous studies that find lower satisfaction among losers of a WTA contest."Table 7 tests for the effect of contest design on mothers' perception of worker effort.This is measured by interviews with the mothers of children in each center in our Endline-II survey regarding worker efforts in the previous month.We are therefore comparing responses about efforts made after the end of incentives, between workers who received PRP rather than WTA incentives."Questions ask for the mother's recall of: the number of home visits made by the worker, how many times the worker discussed the child's development at the center or elsewhere, the number of times group meetings were organized at the Anganwadi center and the number of other meetings convened by the worker. 
"We also ask mothers about the content of communication from the workers, which are coded as discrete indicators whether the worker gave them any advice in the previous month on nutrition and diet, hygiene, medicine, or if they showed them their child's growth chart, or scared them about consequences of malnutrition.Results presented in Table 7 show positive point estimates but no statistical significance for any of these effort measures in the proportional rather than winner-take-all treatment.This provides only suggestive evidence that efforts were higher in the proportional arm, which could be due to not having sufficiently accurate measures of what workers actually did to achieve the weight gains we observe.Future studies could focus on improving measurement of each kind of effort, so as to identify the changes made by the workers whose response to additional incentives is most successful.This study presents results of a randomized trial comparing two kinds of financial incentives to improve child nutrition outcomes at ICDS daycare centers in Chandigarh, India."Such incentives are increasingly being introduced in education, health and other services to improve outcomes that are measurable but not sufficiently rewarded by workers' current salaries and other employment arrangements.The incentives offered in this study use a fixed budget in a time-limited contest designed to elicit additional efforts, reveal best practices and pay bonuses that reward success.Our trial compares a winner-take-all prize paid to the best performer, which is by far the most widely-used contest design, against an alternative approach in which the same information is used to divide the same funds proportionally among all successful workers based on their share of measured gains during the contest period.The two treatments offered in this experiment use the same budget and identical information, differing only in whether payments are made on a winner-take-all or proportional basis.The comparison we offer between two otherwise identical incentive schemes is intended to complement the many previous trials that introduce financial rewards relative to a status-quo control or other arms in which workers receive different amounts of money and information.This paper aims specifically to complement Singh and Masters, which compares the introduction of piece-rate incentives to an unconditional fixed bonus among otherwise similar Anganwadi workers in other areas of Chandigarh.That paper found that piece-rate incentives for each increment of success led to higher performance than the fixed bonus, but also ended up paying more money to workers than the fixed bonus.Implementing piece rates is challenging in part because development agencies typically operate on fixed annual budgets.The incentive schemes introduced in this paper involve payments whose timing and upper limit of total cost is entirely predictable, differing only in how they are divided among workers."Payments offered in this trial are set at 600 Rs per worker, which amounts to a roughly 5% increment above workers' monthly salary over the 3-month period of the contest.In Chandigarh, like other parts of India, each Anganwadi workers operates her own ICDS center that typically serves about 25 preschool children every day.Centers are grouped in geographic clusters of varying sizes, depending primarily on population density of children in the low-income families that the ICDS program is designed to serve."Payouts are awarded based on each worker's performance 
relative to others in their neighborhood, who presumably share information and face common shocks as well as other unobservable characteristics.The bonus paid to each worker is based on the number of children at their center whose malnutrition status improves, minus any declines below the threshold weight-for-age z score of −2.Anganwadi workers were individually randomized into either a traditional WTA contest, or the more novel PRP treatment.With winner-take-all incentives, the highest-ranked worker in each cluster receives the entire reward budgeted for their cluster.When allocated proportionally, every worker with net improvements receives a fraction of that award, based on their share of all net improvements realized in their cluster.Our hypothesis is that proportional rewards would be more encouraging than winner-take-all prizes for workers whose results are unlikely to be top-ranked, thereby eliciting more total effort from the entire cluster, as suggested by theoretical models and laboratory results such as Cason et al. and evidence from athletic competitions such as Brown.Results reveal consistently beneficial effects of using proportional awards instead of the winner-take-all prize.The average treatment effect over our entire sample is a decline of 4.3 percentage points in the prevalence of malnutrition over the 3 month contest period, increasing to 5.9 points at 6 months after the contest has ended."Those improvements in the ICDS program's nutritional objective translate to improvements in average weight-for-age z scores of 0.071 standard deviation units at 3 months, rising to 0.095 at 6 months.Children in the care of workers with proportional incentives improved significantly more, especially after the contest ended, with persistent effects that could operate either through momentum in child growth or sustained changes in worker behavior.Mechanism tests reveal that, after the contest is over, the workers who were randomized into our proportional-incentive arm and were not highly ranked had higher self-reported satisfaction in life and also in ability.This parallels the greater gains observed among children in their care."Workers' efforts as reported by mothers have positive but not statistically significant point estimates.The internal validity of our results is strengthened by individual randomization into treatment groups within neighborhoods that share common information, management styles and other unobservable characteristics.We found no statistically significant differences at baseline between the treatment arms in our variables of interest, and also tested for possible artefactual effects of our study design on placebo outcomes.The data we report include child outcomes and worker satisfaction several months after the incentives were announced and paid; further work could include even longer-term outcomes, diffusion of best practices from workers who received bonuses, and selection into employment where effort is more likely to be rewarded.Replication to test external validity is facilitated by the modular structure, simple design and low cost of our study.We show that both PRP and WTA contests can be introduced into groups as small as just two workers, up to the largest group in our trial which had 19 workers."Either kind of incentive could be scaled up by local authorities across India's 1.3 million ICDS centers, or introduced to other service-delivery agencies that also aim to improve a measurable outcome.In our trial, implementation and data collection was undertaken 
by a local NGO, which could be done elsewhere through contracts with independent survey firms to avoid favoritism or self-interest in the measurement process. Budgeting and administration are facilitated by making the bonus pool just large enough to attract workers' attention, in this case about 5% of monthly payroll offered as a one-time incentive on a fixed timeline. One-time bonuses like those tested in this trial are often used to complement fixed salaries in settings where outcomes can be measured but other types of pay for performance are not desirable. We find that such contests elicit better results when payments are disbursed in proportion to measured gains, especially over the longer run after the contest has ended. The standard approach of a winner-take-all prize is more highly powered, concentrating the available funds to motivate top performers, but we find that the cumulative total effort of all workers is lower because winner-take-all contests provide no incentive for lower-ranked workers to improve. Proportional incentives reward every increment of success, from every worker, including those whose outcomes are initially low. Delivery of other public services in education, health and other sectors might benefit from this approach, offering opportunities for further testing and eventual scale-up of the type of financial incentive introduced in this trial.
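As a purely illustrative back-of-envelope reading of the figures quoted above (not a cost-effectiveness analysis reported in the study), the fixed bonus pool and the estimated treatment effect can be combined as follows; both arms spend the same budget, so the gain from the proportional design comes at no additional cost.

```python
# Back-of-envelope arithmetic combining figures quoted in the text; this is
# an illustration only, not an analysis from the study itself.
POOL_PER_WORKER_RS = 600     # bonus pool budgeted per worker (both arms)
CHILDREN_PER_CENTER = 25     # typical enrolment per Anganwadi center
ATE_6M = 0.059               # reduction in malnutrition prevalence, PRP vs WTA

bonus_per_child = POOL_PER_WORKER_RS / CHILDREN_PER_CENTER
extra_children_per_center = ATE_6M * CHILDREN_PER_CENTER
print(f"incentive budget per enrolled child: ~{bonus_per_child:.0f} Rs")
print(f"additional children per center above the WAZ = -2 threshold at "
      f"6 months (PRP vs WTA, same budget): ~{extra_children_per_center:.1f}")
```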
We conduct a randomized trial to compare incentives for improved child outcomes among salaried caregivers in Chandigarh, India. A contest whose prize is divided among workers in proportion to measured gains yielded more improvement than a winner-take-all program. In our population of about 2000 children served by 85 workers, using proportional rewards led to weight-for-age malnutrition rates that were 4.3 percentage points lower at 3 months (when rewards were paid) and 5.9 points lower at 6 months (after the contest had ended), with mean weight-for-age z scores that were 0.071 higher at 3 months, and 0.095 higher at 6 months. Proportional bonuses led to larger and more sustained gains because of better performance by lower-ranked workers, whose efforts were not rewarded by a winner-take-all prize. Results are consistent with previous laboratory trials and athletic events, demonstrating the value of proportional rewards to improve development outcomes.
480
Drivers of airborne human-to-human pathogen transmission
The horror of airborne infectious diseases subsided substantially in the 20th century in developed nations, largely due to the implementation of hygiene practices and the development of countermeasures such as vaccination and antimicrobials. The recent emergence of zoonotic pathogens such as avian influenza A viruses and coronaviruses (SARS CoV and MERS CoV) raises the specter of future pandemics with unprecedented health and economic impacts if these pathogens gain the ability to spread efficiently between humans via the airborne route. While cross-species barriers have helped avoid a human pandemic with highly pathogenic avian influenza A viruses, only a limited number of mutations in circulating avian H5N1 viruses would be needed for the acquisition of airborne transmissibility in mammals. A global pandemic of SARS CoV was averted largely by fast identification, rapid surveillance and effective quarantine practices. However, not all emerging pathogens can be contained, owing to delays in initial detection, an inability to properly assess pandemic risk, or an inability to contain an outbreak at the point of origin. Before 2009, widely circulating H1N1 swine viruses were largely thought to pose little pandemic risk but, despite early attempts to limit spread, pH1N1 caused the first influenza virus pandemic of the 21st century. Implementation of suitable countermeasures is hampered by our limited capability to anticipate the sequence of events following the initial detection of a novel microorganism in an animal or human host. In the immediate future, the occurrence of novel epidemic agents, as well as their detection and the awareness of them, will likely increase both qualitatively and quantitatively. Updating emergency preparedness plans in an evidence-guided process requires an interdisciplinary concept of research and public health efforts that takes into account the multifactorial nature of the problem in order to aid policy formulation. Here we build on a conceptual framework for the classification of drivers of human exposure to animal pathogens and suggest a framework of drivers determining the efficiency of human-to-human transmission involving the airspace. The airborne transmission of pathogens occurs through 'aerosol' and 'droplet' routes. In a strict sense, airborne transmission refers to aerosols that can spread over distances greater than 1 m, while droplet transmission is defined as the transfer of large-particle droplets over a shorter distance. Here, we consider airborne transmission of infectious agents in a broader sense, as any transmission through the air, which consists of four steps: firstly, the pathogen is associated with either liquid droplets/aerosols or dust particles when traveling directly from donor to recipient, but may also be deposited on a surface and re-emerge into the air later; secondly, the pathogen is deposited in the recipient, usually by inhalation, resulting in infection of the respiratory tract; thirdly, the pathogen is amplified, either in the respiratory tract or in peripheral tissues; and finally, the pathogen emerges at the site of shedding in sufficient loads and is capable of expulsion. In the process of transmission, the recipient becomes a donor when microbial replication and subsequent pathophysiological events in the host result in release of the pathogen. Airborne transmission of microbes can follow different aerodynamic principles, and some microorganisms are suspected or proven to spread by more than one route. Moreover, the mode of transmission and the anisotropic delivery of a pathogen into the recipient contribute to disease severity. There are no substantive differences in droplet-size distribution between expulsive methods such as sneezing, coughing with the mouth closed, coughing with the mouth open, and speaking one hundred words loudly; however, the number of respiratory droplets that likely contain pathogens can differ. After expulsion, successful transmission requires that the pathogen remains infectious throughout airborne movement, with or without an intervening deposition event. Drivers influencing the success of such a process are those that define the chemico-physical properties of both the air mass and the vehicle or carrier, including temperature, ultraviolet (UV) radiation, relative humidity (RH) and absolute humidity, and air ventilation or air movement. Their interplay ultimately determines pathogen movement and stability. Pathogen survival is also influenced by pathogen structure: for example, enveloped viruses are less stable outside the host than non-enveloped viruses. Among Chlamydia pneumoniae, Chlamydia trachomatis LGV2, Streptococcus pneumoniae, Streptococcus faecalis, Klebsiella pneumoniae and cytomegalovirus, the survival of C. pneumoniae in aerosols was superior. Variation in RH might influence not only the environmental stability of the pathogen but also the droplet size, which in turn defines the deposition rate. Eighty percent of droplets emitted from a cough deposit within 10 min, and the highest deposition rates for all droplet-nuclei sizes occur within a 1 m horizontal distance (a simple settling-time estimate illustrating this behaviour is sketched at the end of this section). Pathogens such as influenza virus can persist in the environment for hours to days and have been found on surfaces in healthcare settings. UV radiation is the major inactivating factor for influenza viruses in the outdoor environment. Pathogen-containing large particles deposit predominantly in the upper airway, medium-sized particles mainly in the central and small airways, and small particles predominantly in the alveolar region of the lungs. In general, airborne pathogens tend to have a relatively low 50% infectious dose (ID50). At any specific site of deposition within a host, the ID50 of a pathogen is determined by factors such as local immune responses and the cellular and tissue tropism defined by the distribution of receptors and/or adherence factors, tissue temperature, pH, polymerase activity of the pathogen, and activating proteases. Co-infections may alter immune responses and the factors that govern tropism. Pathogens amplify either at the site of initial deposition in the respiratory tract or in peripheral tissues. For influenza virus or human respiratory syncytial virus, amplification occurs at the site of initial entry, whilst other pathogens, for example measles virus (MeV), Nipah virus and Mycobacterium tuberculosis, have either distinct secondary amplification sites or replicate both locally and systemically. Microorganisms often damage host tissue through the release of toxins and toxic metabolites, as a direct result of replication, or as a consequence of the activation and infiltration of immune cells. This may allow the pathogen to spread in the body and replicate to sufficient numbers to favor onward transmission. Self-assembly into highly organized, surface-attached, matrix-encapsulated structures called biofilms enhances microbial survival in stressful environments and may even harbor drug-tolerant populations. Biofilms of M.
tuberculosis can contain resistant populations that persist despite exposure to high levels of antibiotics .A third strategy is followed by many human pathogens that have at some stage evolved or acquired a range of mechanisms allowing host immune antagonism to successfully replicate to high loads in the presence of innate and/or adaptive immunity .However, several rounds of stuttering chains of transmission may be required before a pathogen can successfully emerge into a new host population.Multiple forays of the pathogen into the population may serve to create a level of immunity, thus establishing conditions favoring more prolonged outbreaks following a reintroduction of the infectious agent which then enables sustained pathogen/human host co-evolution.Evolution to allow host adaptation and/or airborne transmissibility can be driven by changes in pathogen population genetics at the consensus level but also by genetic variability of the entire population.RNA viruses exist as a population of closely related genetic variants within the host.The ability of a pathogen to generate a genetically diverse population is considered critical to allow adaption when faced with a range of selective pressures .High-fidelity poliovirus mutants that produce viral populations with little genetic diversity are attenuated despite apparent overall identical consensus sequences to wildtype strains .Factors enabling acquisition of novel traits through high mutation rate, and/or a propensity to acquire novel genetic material through re-assortment or recombination are important drivers at the level of pathogen replication, amplification and adaptation within the host.Bacteria additionally deploy transfer of mobile genetic elements to acquire novel virulence factors, toxins and/or antimicrobial resistance.Microbial adaptation by drug resistance is often attained at the cost of decreased fitness, as illustrated by a reduced growth rate of M. tuberculosis strains resistant to isoniazid , but multi-drug resistant strains can be up to 10 times more or 10 times less transmissible than pan-susceptible strains .Heredity of susceptibility on the host side enhances the risk of disease and transmission, as seen historically with the selection of populations with innate tuberculosis resistance by the evolutionary pressure of several, hundred-year-old epidemics in Europe and North America .The high viral load in the upper respiratory tract combined with a strong cough reflex drives efficient host-to-host human transmission of MeV .At the level of virus amplification in host epithelia, lesions are observed in the later stages of disease as well as extensive infiltration of infected tissue by immune cells.It is currently unclear to what extent the host immune response contributes to formation of pathological lesions and to the transmissibility of the virus.Levels of exhalation of infected particles significantly vary interpersonally , for example, for M. tuberculosis and for influenza A virus , both in healthy and in virus-infected persons , as well as intra-individually over time .Drivers of airborne transmissibility appear to be closely linked to disease progression and severity, to the incubation period and onset of symptoms in M. 
tuberculosis .Determinants that confer efficient airborne transmissibility for zoonotic pathogens among humans, thus permitting pandemic emergence, can be defined as qualitative or quantitative changes in key factors that govern one or more of the four stages of the transmission circle.The sum of changes in spatial and temporal terms that facilitate or limit the progression to successive stages of the circle corresponds to the evolution of emerging pathogens.To predict the likelihood that a certain pathogen identified in, or next to, a human being will become a pandemic threat, a multilevel framework of drivers has to be considered.Categorized in a descriptive manner, drivers may act at the level of cells and tissues, of individuals, communities and countries, or even on a global scale.However, drivers are highly interactive and may be effective at different scales or levels.As an example, host susceptibility might be directly encoded for in the host genome.Once the transmitted pathogen finds an entry into such a susceptible human host, its interaction with the host immune system ultimately determines whether infection is established and whether it progresses to a point where onward transmission to a new host is facilitated.These interactions are influenced at the individual or host level by other drivers governing an immune response.These drivers in turn may include the host genetic background as genetic entities impact on the kind of immune response developed by an individual or a group of individuals within a population.Pre-existing immunity or the presence of concurrent infections or disruption to the normal functioning of the immune system including co-morbidities which may represent more distal drivers or in turn modulate more distal drivers, play important roles.With effective treatment, the contagiousness of a particular disease may be reduced not only by decreasing the number of the pathogens in the infected site and those that will be expectorated but also by introducing, for example, antibiotic into the infectious droplet nuclei .Many drivers are acting at the tissue or individual level but are themselves subject to modification at the community level by, for example, the frequency and intensity of social contact and the composition of the group a person is interacting with, for example, in terms of health and hygiene standard and age distribution.Examples are tuberculosis in the working class in the age of industrialization and the Spanish flu during World War I.More distal factors are relevant at the country level including host population genetics, demography, public health strategies for treatment or vaccination or access to medical care, which are likewise, to some extent, governed by socio-economic drivers.Air pollution, land use, urbanization and socio-economic changes are important drivers of emergence of airborne infections at the supranational level.Although human population densities have continued to rise and reach unprecedented levels, airborne diseases of public concern in developed countries in the second half of the 20th century have typically comprised relatively self-limiting or preventable diseases like the common cold, seasonal flu and MeV.Continuous developments like agglomeration of settlement areas in developing countries, along with urbanization and rural depopulation, and exponentially increasing human movement in numbers and distances on a global scale, however, may outpace contemporary achievements in disease prevention.Climatic changes also occur at a 
global level, which could have serious impact on infectious diseases in humans and animals .Extreme weather conditions alter seasonal patterns of emergence and expansion of diseases even though direct proof for the influence of climate change on regional, national, supranational or global level on the emergence of new or frequency of established infections is difficult to obtain.To mirror the complexity of the problem we suggest a concise and weighted framework of drivers taking into consideration different levels, from cell and tissue through to global scale but also the multitude of influences between these levels.While some drivers at an outer level may only imprint on one driver in the level below, other drivers can impact several lower level drivers and in sum may be equally important to drivers considered to act from an outer level.Classification of drivers as acting at a more proximal or more distal level also is not exclusive and inverted imprinting may occur under several circumstances as outlined above.Despite decades of research on ‘airborne transmission factors’, surprisingly few quantitative data is available for factors impacting the majority of infectious diseases that transmit via this route.Furthermore, for some microorganisms, for example, for coronaviruses, epidemiological or experimental evidence that transmission of the pathogens via the airborne route is successful or even contributes importantly to epidemic or pandemic spread of the agent remains weak.A large body of work is focused on influenza viruses and the indoor environment, likely because perturbations of the indoor pathogen transmission ecosystem are easier to generate and quantify."Published studies assessing viable pathogen counts directly from subject's respiratory maneuvers are restricted to a few respiratory pathogens.Evaluation of factors underlying the highly variable levels of pathogen shedding, for example, by long-term examination of individuals to determine how pathogen load changes during infection of the respiratory tract with different viruses and bacterial species are urgently needed.Current technical developments may open novel experimental opportunities .When designing such studies, consideration of zoonotic and human-specific pathogens as well as delineating strategies employed by both viruses and bacterial pathogens will help to identify commonalities in the strategies followed by successful airborne pathogens.Especially investigation of organisms assumed to have no capacity of human-to-human transmission via the airborne route in suitable animal models will help explain why some pathogens, despite having a very low infectious dose, are likely not directly transmitted from infected persons.Examples would be the human-to-human transmission of Yersinia pestis versus Francisella tularensis, or the assessment why Legionella pneumophila transmission can occur over long distances from artificial sources , but usually not inter-personally.Beyond representing an ever-increasing ethical and economic burden, the recent more frequent occurrence of zoonotic pathogens in the human population also inheres in the unique opportunity to further our knowledge of the prerequisites for airborne pandemic spread.Genomics-based methods have already allowed a significant advance in our understanding of the evolution and spread of bacterial pathogens.In the developed world, whole genome sequencing is being established for routine use in clinical microbiology, both for tracking transmission and spread of pathogens, 
as well as for the prediction of drug-resistance profiles, allowing rapid outbreak detection and analysis in almost real time, as evolution occurs in the wild. Except for influenza A viruses, the genetic correlates of the ability of a zoonotic pathogen to efficiently overcome the interspecies barrier and spread rapidly within the human population are poorly defined. Any emergence of novel molecular patterns in microorganisms results from an evolutionary process driven by factors not encoded in genomes and determined by the frequencies of genome alterations occurring under natural conditions. The broad introduction of 'omics' technologies, advances in global data exchange capabilities and the advent of informatic tools allowing the processing of large data collections have put at our disposal the technical capacity to integrate phenotypic data from clinical, epidemiological and experimental studies in vitro and in vivo, with relevant target species such as livestock in particular, and to perform genome-wide association studies. Deploying the categorization of drivers and the relative level of their impact as suggested herein will allow for a weighting of the different drivers in the specific framework for any particular pathogen, and will help to predict the pandemic potential of airborne pathogens.
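For illustration, the deposition behaviour summarized earlier (large droplets falling out within roughly 1 m of the source while fine droplet nuclei remain suspended) follows directly from gravitational settling. The minimal Python sketch below estimates terminal settling velocities with Stokes' law for a few droplet diameters; it is a back-of-the-envelope aid rather than part of the reviewed studies, and it assumes still air, spherical water droplets of unchanging size (no evaporation) and Stokes drag.

```python
# Minimal sketch: Stokes-law settling of respiratory droplets in still air.
# Assumptions (not from the reviewed studies): spherical water droplets,
# no evaporation, quiescent air at roughly 20 degrees C.

G = 9.81              # gravitational acceleration, m/s^2
RHO_WATER = 1000.0    # droplet density, kg/m^3
RHO_AIR = 1.2         # air density, kg/m^3
MU_AIR = 1.8e-5       # dynamic viscosity of air, Pa*s

def settling_velocity(diameter_m: float) -> float:
    """Terminal velocity (m/s) of a small sphere under Stokes drag."""
    return (RHO_WATER - RHO_AIR) * G * diameter_m ** 2 / (18.0 * MU_AIR)

def time_to_fall(height_m: float, diameter_m: float) -> float:
    """Seconds for a droplet to settle through height_m at terminal velocity."""
    return height_m / settling_velocity(diameter_m)

if __name__ == "__main__":
    for d_um in (100.0, 10.0, 1.0):
        t = time_to_fall(1.5, d_um * 1e-6)  # fall from roughly mouth height (1.5 m)
        print(f"{d_um:6.1f} um droplet: ~{t:,.0f} s to settle 1.5 m")
```

With these assumptions, a 100 µm droplet settles 1.5 m in a few seconds, a 10 µm droplet in several minutes, and a 1 µm droplet nucleus only after many hours, which is consistent with the qualitative picture given above of large droplets depositing close to the source while fine aerosols travel much further.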
Airborne pathogens — either transmitted via aerosol or droplets — include a wide variety of highly infectious and dangerous microbes such as variola virus, measles virus, influenza A viruses, Mycobacterium tuberculosis, Streptococcus pneumoniae, and Bordetella pertussis. Emerging zoonotic pathogens, for example, MERS coronavirus, avian influenza viruses, Coxiella, and Francisella, would have pandemic potential were they to acquire efficient human-to-human transmissibility. Here, we synthesize insights from microbiological, medical, social, and economic sciences to provide known mechanisms of aerosolized transmissibility and identify knowledge gaps that limit emergency preparedness plans. In particular, we propose a framework of drivers facilitating human-to-human transmission with the airspace between individuals as an intermediate stage. The model is expected to enhance identification and risk assessment of novel pathogens.
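As a companion to the ID50 discussion in the review above, the sketch below implements the single-hit exponential dose-response model commonly used in quantitative microbial risk assessment to relate an inhaled or deposited dose to infection probability. It is a generic illustration only: the choice of model and the placeholder ID50 value are assumptions and are not data from the review.

```python
# Minimal sketch of the single-hit exponential dose-response model:
#   P(infection) = 1 - exp(-k * dose),  with ID50 = ln(2) / k.
# The ID50 below is a made-up placeholder for illustration only.
import math

def infection_probability(dose: float, id50: float) -> float:
    """Probability of infection for a given deposited dose (organisms)."""
    k = math.log(2.0) / id50
    return 1.0 - math.exp(-k * dose)

if __name__ == "__main__":
    ID50 = 100.0  # hypothetical: 100 organisms infect half of exposed hosts
    for dose in (1, 10, 100, 1000):
        p = infection_probability(dose, ID50)
        print(f"dose {dose:5d} organisms -> P(infection) ~ {p:.2f}")
```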
481
Long-term biocompatibility, imaging appearance and tissue effects associated with delivery of a novel radiopaque embolization bead for image-guided therapy
Embolotherapy is a procedure for introducing a variety of agents into the circulation in order to occlude blood vessels for therapeutic intent, for instance to prevent bleeding, to arrest flow through abnormal connections such as arteriovenous malformations or to devitalise a structure, organ or tumorous mass by inducing ischemic necrosis.Embolic agents come in many different forms including microparticles, pellets, glues or metallic coils .These embolics are administered to targeted tissues through a catheter inserted and manoeuvred through the vasculature into the desired location.One common application of this technique is the use of microparticles, most usually microspheres/beads to treat tumors in the liver where the intention is to physically occlude the vessels feeding the tumour in order to induce localised ischemic necrosis of the malignant mass.However, perfect tumour targeting of microparticles is not possible and embolization of normal adjacent healthy liver parenchyma is inevitable.The embolization of healthy liver parenchyma can induce significant local tissue changes, including elevated serum liver enzymes and tissue damage, but these side-effects typically resolve over time .However, there should be no chronic material-related inflammatory reaction or clinical sequelae.The microparticles themselves may disappear over time if bioresorbable, or if non-degradable, should be sufficiently bioinert and well-tolerated in tissue where they reside.Embolotherpy with microparticles is conducted under X-ray based image guidance where injection of iodine-based liquid contrast agent, a radiodense material, is used to create a roadmap of the network of blood vessels to be embolized.Some embolic devices such as coils are inherently radiopaque which enables them to be easily located during and post procedure.Liquid embolics such as Glue or Onyx® are often mixed with radiodense materials prior to use in order to impart radiopacity .Microparticles are usually composed of synthetic and natural polymers that are radiolucent and therefore cannot be directly seen during delivery which requires the addition of iodine-based liquid contrast agent to monitor their delivery.However, only the degree of blood flow cessation indicates when sufficient embolic agent has been delivered to achieve the desired flow-based embolization end-point.It has been demonstrated using embolic bead devices that there is a degree of trapped residual soluble contrast agent retention at the site of embolization that dissipates over the next several hours post-procedure .CT imaging within a 6 h time frame post-delivery therefore, provides contrast retention as a surrogate marker of bead location and some degree of comfort that the correct blood vessels have been embolized.The exact bead location, however, remains unknown.The concept of intrinsically radiopaque embolic beads has been explored for many years and multiple experimental studies are available in the literature.In some cases the beads have been made radiodense by the incorporation of metallic components such as tantalum or barium .This can have significant effect on the handling and administration of the beads as the increased density induces rapid sedimentation .There is also concern for the long-term fate of the entrapped contrast material and the potential for leaching into the surrounding tissue over time.The incorporation of iodine-containing species into polymers has therefore been a more widely studied approach, resulting in materials useful as bulking 
agents , in bone cements , for nucleus pulposus replacement and as microparticle emboli .The radiopacity can be introduced by means of an iodine-bearing monomer at the polymerization stage or by chemical attachment of an iodinated species with reactive functionality to preformed polymer microspheres .Compounds based upon iodinated benzyl groups are convenient starting materials for either of these approaches as they provide for synthetic flexibility and enable high iodine content per unit mass.It is for these reasons that such compounds are the basis for most of the commercially-available soluble contrast media).Radiopaque beads have been prepared based upon incorporation of 2,3,5 triiodobenzyl moieties) , but whilst they possessed high unit iodine content in the 25–30 wt% region, this somewhat compromised the hydrophilicity and softness attributes that are desirable for the handling and microcatheter delivery of embolization beads.Horák et al. tried to address this problem with the synthesis of 3--2,4,6-triiodobenzoic acid) and its subsequent copolymerization with HEMA in the presence of additives to induce porosity to the microspheres .They found it necessary to incorporate at least 27 wt% iodine for adequate radiopacity but they experienced issues with irregular particle formation and agglomeration during the polymerization.Others have attempted to increase the hydrophilicity of the system by utilising the mono-iodinated 2--oxo-ethylmethacrylate monomer) copolymerised with hydrophilic comonomers such as hydroxyethyl methacrylate or 1-vinyl-2-pyrrolidinone .Whilst this did enable the synthesis of some microsphere formulations that were water-swellable in nature, only those with low water content and iodine contents of ∼20 wt% were sufficiently radiopaque to be useful in practice.While it has been difficult to establish a balance between the appropriate physicochemical properties and useful levels of radiopacity, it has been demonstrated that the materials based on the triiodinated chemistry display good biocompatibility .In vitro cell-based analyses show no indications of cytotoxicity or effect on cell proliferation, while in vivo implantation studies show they are well-tolerated with no signs of adverse tissue reactions.We have therefore recently reported on efforts to modify LC Bead®, well-characterized polyvinylalcohol-based hydrogel beads for embolization of hypervascular tumors and AVMs.Approaches were developed to activate the bead chemistry towards triiodinated species) to render them radiopaque whist maintaining their hydrogel nature .We have since optimized this chemistry and have a process for manufacture of intrinsically radiopaque beads that have water contents in the region of 60–70% with iodine contents in the range 189–258 mg/mL true bead volume .This provides for an excellent degree of radiopacity coupled with the benefits of a hydrogel performance).The additional visual information provided by these beads may provide tools for standardization and reproducibility of end points and treatment effects in addition to offering better conspicuity to determine target and non-target embolization .Furthermore, the durable imaging appearance of the beads may also aid in the guidance and evaluation of the embolization procedure.Intra-procedural identification of tissue at risk for under-dosing or under-treatment can better inform the physician of options to immediately target this tissue with additional therapies rather than waiting for the outcome of follow-up scans .In this study 
we present the outcome of the biocompatibility testing of the optimized product, LC Bead LUMI™, and investigate the long-term effects up to 90 days post embolization in a swine liver. We also concurrently examine the bead location with X-ray fluoroscopy and computed tomography, as well as the associated tissue changes resulting from their embolization. The LC Bead LUMI™ used in this study was prepared and characterized as described previously. Briefly, sulfonate-modified acrylamido-polyvinylalcohol beads were made using a reverse suspension polymerization process. Triiodobenzyl groups were coupled to the PVA chains of the preformed beads using a proprietary process to yield beads that were sieved into different size fractions, dispensed into vials and steam sterilized. The 70–150 μm size range of RO Beads was selected for the biocompatibility and embolization studies, as these beads provide a high challenge given the large number of beads and high surface area for a given volume. Visipaque™ 320 was used to make the bead suspension from each vial of beads. The biological evaluation took into account the anticipated nature and duration of contact with the RO Bead, which is a permanently implanted device in contact with blood. All biocompatibility testing studies were performed by NAMSA in accordance with current Good Laboratory Practice regulations. In vivo studies conformed to NAMSA Standard Operating Procedures that are based on the “Guide for the Care and Use of Laboratory Animals” (National Academy Press, Washington, D.C., 2011). The use of a control group is uncommon in this type of safety assessment and is not straightforward where both biocompatibility and imaging aspects are under investigation. The non-radiopaque equivalent of LC Bead LUMI™ is LC Bead™, which has itself been tested in large animal safety studies, and hence the biocompatibility and tissue reaction to bland embolization using this device are known and reported. From an imaging perspective, the usefulness of RO Beads regarding intra-procedural visibility has already been demonstrated using control non-RO and RO Beads without contrast in two recently published articles. No control group was therefore selected for comparison in this study. This evaluation was performed in a swine liver embolization safety model at MPI Research Inc.
using total of ten male experimentally naïve domestic Yorkshire crossbred swine.The study was split into two phases: a shorter pilot study in which a maximum volume of RO Beads permitted delivery was fixed at ≤ 6 mL and sacrifice was scheduled for 14 days; and a longer term main study in which the volume of RO Beads permitted delivery was raised to ≤ 10 mL and sacrifice was scheduled at 32 days and 91 days.The purpose of the pilot study was to determine a volume of liver tissue embolized by a given volume of beads, whilst avoiding off-target embolization in adjacent organs such as the stomach, small bowel, pancreas and spleen before proceeding to the longer term study.Imaging was performed at multiple time points across both studies involving X-ray fluoroscopy, digital subtraction angiography, single shot X-ray and CT with and without IV contrast injection.MPI Research Inc.Standard Operating Procedures conditions conformed to USDA Animal Welfare Act and “Guide for the Care and Use of Laboratory Animals”, Institute of Laboratory Animal Resources, National Academy Press, Washington, D.C., 2011.Vascular access was obtained through femoral artery cut-down and a sheath was placed for stable access.Through this sheath, a combination of a guiding catheter, a 2.7 French microcatheter and a microguidewire was used to select the target hepatic lobe arteries.A consistent angiography and embolization protocol was used during the study.Under fluoroscopic guidance, a guiding catheter was placed at the entrance to the coeliac artery and an angiogram was performed to visualize the branches of the coeliac artery.A microwire and microcatheter combination was then used to select the common hepatic artery, and lobar hepatic arteries supplying approximately 50% of the total liver volume.This lobar selection favored the left lateral or left median liver lobe whenever anatomically feasible.In general, a larger artery size with no extrahepatic branches was preferred and care was taken to avoid any vascular spasm or injury during this step.A pre-embolization angiogram was performed to confirm the catheter tip position, define hepatic artery anatomy and blood flow, and visualize the liver volume to be embolized.With the catheter in position and an appropriate target location identified, the bead suspension was administered slowly under fluoroscopic guidance taking care to minimize reflux and avoid any extra-hepatic non target embolization.A post-embolization angiogram was also obtained.The fraction of liver area embolized was estimated from the post embolization angiography and DSA images.This was correlated and confirmed on the post embolization CT images, which provided a good approximation of the total volume of liver embolized.LC Bead LUMI™ was prepared as a 1:10 dilution using iodinated contrast medium and delivered through the microcatheter under continuous X-ray fluoroscopic guidance.All angiography and embolization procedures were performed using the GE OEC 9000 elite or GE OEC 9800 C-arm units with the standard GE cardiac software package allowing cine loops at 30 frames per second as well as digital subtraction angiography.These C-arms allow for some automatic adjustments but basic fluoroscopy parameters used were peak kilovoltage range of 85–95 and current range of 30–150 milliamperes and dose range between 3.5 and 6.5 mGym2.Fluoroscopy and digital subtraction angiography images were obtained before and after embolization to document changes in arterial flow and to visualize the location of the beads in 
the embolized target.The embolization was performed to clinically relevant angiographic endpoint of fill in Ref. and care was taken to minimize off target embolization.Review and comparison of pre- and post-embolization fluoroscopy and DSA images demonstrated the embolized target lobe, associated changes in blood flow, and location of RO Beads in each animal.Single shot X-ray images were also obtained with temporary suspension of respiration approximately 5 min after completion of embolization to also allow for “wash out” of residual liquid contrast and better visualize the RO Beads.All CT scans were performed using a GE Lightspeed 16 CT scanner and performed with helical acquisition using the following parameters: thickness 1.25 mm, interval 1.25, kV 120, and mA 220–350.Multiplanar reformation in sagittal and coronal planes was also performed.Abdominal CT scans without administration of IV contrast were obtained before and 1 h after embolization with RO Beads.Additional CT scans with and without administration of IV contrast) were obtained at 1, 7, 14, 30 and 90 days following embolization according to the imaging schedule in Table 1.The contrast-enhanced CT images in this study were routinely acquired in both portal-venous and arterial phase.To the experienced user, close inspection of arterial phase CT images does show subtle differences in appearance of arteries containing RO Beads compared to those without beads, but this may not be clinically relevant compared to the non-contrast and portal-venous phase CT images.The non-contrast phase clearly shows the location and distribution of the beads and the portal venous phase shows parenchymal changes and venous structures in addition to the bead-filled arteries.CT Images were reviewed in axial, coronal and sagittal planes as well as in reconstructed maximum intensity projection images, by an Interventional Radiologist with over 10 years experience in clinical practice and highly familiar with preclinical evaluation in the porcine liver embolization model.The MIP images display bone and other radiodense structures such as IV contrast and RO bead-filled vessels preferentially with other lower-attenuating structures being less well visualized.The CT scans were reviewed for the following: i) visibility of RO Beads, ii) location RO Beads within the liver, iii) approximate area/volume of liver embolized, iv) extrahepatic off target embolization in adjacent organs including the stomach, duodenum, spleen, pancreas and lungs, v) and other imaging findings which could be related to hepatic embolization and associated ischaemia.Clinical pathology evaluations were conducted on all animals before and after embolization on days 2, 7, 14, 21, 32, 61, and 91, as applicable.The animals had access to drinking water but were fasted overnight prior to each scheduled sample collection.Blood samples were collected from the jugular vein.The samples were collected into the appropriate tubes for evaluation of haematology, clinical chemistry and coagulation parameters.On occasion, blood samples were redrawn from animals if the initial samples had clotted.Urine samples were collected using steel pans placed under the cages for at least 16 h.Histopathology was conducted on tissues harvested post sacrifice on the scheduled day.Necropsy examinations were performed under procedures approved by a veterinary pathologist on all animals.The animals were euthanized by sedation with an intramuscular injection of Telazol, followed by an intravenous overdose of sodium 
pentobarbital solution and exsanguination by transection of the femoral vessels.The animals were examined carefully for external abnormalities including palpable masses.The skin was reflected from a ventral midline incision and any subcutaneous masses were identified and correlated with antemortem findings.The abdominal, thoracic, and cranial cavities were examined for abnormalities.The organs were removed, examined, and, where required, placed in fixative.Special attention was paid to the liver for abnormalities, as well associated vasculature, surrounding tissue, gallbladder, hepatic bile ducts, GI tract, lungs, heart, kidney, spleen, and brain.All tissues were fixed in neutral buffered formalin.Microscopic examination of fixed hematoxylin and eosin-stained paraffin sections was performed on sampled sections of tissues.For some sections, Russell-Movat Pentachrome stain was used in order to evaluate the presence of collagen and mucin.The slides were examined by a board-certified veterinary pathologist.Photomicrographs of representative lesions seen during the microscopic examination, including those considered to be treatment related, were taken.The biocompatibility tests conducted and their outcomes are listed in Table 2.All of the in vitro and in vivo biocompatibility tests performed showed that neither the RO Bead nor any of its extract solutions elicited any response that would be a concern when extrapolating to the target liver cancer patient population.Over the course of the study, animals consumed their daily allotment of food, gained weight in a normal manner and were considered to be in good health.All animals survived to the scheduled necropsy interval with the exception of animal number 10.Animal number 10 was found dead during the anaesthetic recovery period following the Day 32 CT imaging."Off-target embolization of the phrenic artery may have caused respiratory issues that contributed to this animal's early expiration.Prior to the Day 32 imaging procedure, animal number 10 was considered to be in good health and was gaining weight normally.There were no RO Bead-related effects among haematology parameters.All mean and individual values were considered within expected ranges for biological and/or procedure-related variation.At 1 h post-embolization there were mild transient decreases in multiple haematology endpoints that were typical of a dilutional effect caused by intravenous fluid administration during the embolization and imaging procedures.These findings included mild decreases in red cell mass; and leukocyte, platelet, reticulocyte, lymphocyte, monocyte, and basophils counts, relative to pretest.By day 2 these findings had resolved and were generally comparable to pretest for the remainder of the study.There were no effects of RO Bead on coagulation times at any collection interval, up to and including the Day 91 collection.All individual and mean values were considered within an acceptable range for biologic, procedure and assay-related variation.There was a mild and transient increase in fibrinogen at the Day 2 and 7 collection, relative to pretest values.Increases in fibrinogen were consistent with an inflammatory stimulus, as related to the anticipated liver tissue injury/embolization; these effects had resolved by Day 14. 
All other mean and individual values were considered within expected ranges for biological and procedure-related variation. At the Day 2 collection, there was evidence of hepatocellular injury, including mild increases in aspartate aminotransferase (AST), alanine aminotransferase (ALT), sorbitol dehydrogenase, and lactate dehydrogenase relative to pretest values. Hepatocellular injury was attributed to hepatic artery embolization with resulting hepatocellular ischemia. These effects were mostly resolved by the Day 7 collection. Such transient elevations in liver enzymes are also routinely seen in clinical liver embolization procedures. The RO Bead suspension was found to be durable and easy to handle. Agitation of the delivery syringe every two to three minutes was adequate to maintain a good suspension and allowed for effective delivery through the 2.7 French microcatheter without any catheter occlusion. Selective catheterization of the hepatic lobar branches was successfully achieved in all animals without complication. Embolization was performed using RO Bead suspension volumes ranging from 2.2 to 10 mL across the two phases of the study (the corresponding sedimented bead volumes are illustrated in the short sketch at the end of this article). Hepatic embolization with RO Beads was performed with careful technique, using slow injection and continuous fluoroscopic monitoring to devascularize the target arteries while avoiding reflux and off-target extrahepatic embolization. DSA was useful to show hepatic arteries containing RO Beads that no longer filled with IV contrast, demonstrating successful embolization; DSA subtraction artefacts were also indicative of the bead location in the arteries. Single shot X-ray images better visualized the RO Beads. At very early time points during embolization, before the soluble contrast washes out, both beads and contrast are present in the arteries and contribute to the observed image; however, as the soluble contrast washes out, what is seen is largely the contribution of the beads. The CT scans obtained after the procedure and repeated over time up to 90 days confirm that the areas seen on the fluoroscopy and DSA images were in fact the location of the RO Beads. Review of multi-planar CT data sets clearly depicted the location and three-dimensional distribution of RO Beads within the hepatic arteries of the embolized liver lobes, confirming and further detailing the fluoroscopic findings. Although a systematic and quantitative review of visualized artery size was not undertaken, bead-filled arteries measuring in the range of 1.5–2.5 mm or larger were routinely seen on CT scans. Micro-CT is a higher-resolution imaging method that could be used on ex vivo samples to image bead distribution more exquisitely, but it was not performed in this study. We have previously reported on the distributions of similar RO Beads in swine liver using micro-CT. The correlation between vessel sizes filled with beads and imaged using conventional CT and micro-CT is the subject of a follow-up investigation currently in progress and will be reported separately in due course. The RO Beads were easily visible on all post-embolization CT scans obtained without administration of IV contrast and, moreover, were also visible on CT scans obtained after administration of IV contrast. This demonstrates that RO Beads are highly dense and can be visualized with CT even in the presence of tissue contrast provided by IV contrast administration. CT scans obtained over time demonstrate the durable visibility and imaging appearance of RO Beads. Axial plane soft tissue window CT images obtained without
soluble contrast administration from a representative animal are shown prior to and then at 7, 30 and 90 days following embolization with RO Beads.These images clearly demonstrate beads located in the hepatic arteries of the embolized swine liver.Clearly, the bright lines of beads are only observed in the post embolization scans and remain visible with no obvious deterioration in image density over the 90 day period of the study.The appearance of the RO Beads in the embolized liver, including their unchanged density and distribution over the 90 day time period are better appreciated on the coronal plane bone window thick slab MIP CT images obtained with soluble contrast administration.These images show bead filled arteries in relation to the soluble contrast filled portal veins.Close examination of CT scans shows no evidence of off target embolization in adjacent organs including the stomach, duodenum, spleen, pancreas and lungs.In 4 of 10 animals, focal areas of decreased contrast enhancement on venous phase CT images consistent with reduced perfusion and hepatic ischemia in the targeted liver lobes were seen on CT scans obtained 7–90 days after embolization.These expected effects of embolization are shown in Fig. 4.These tissue changes were often closely co-located with the presence arteries densely filled with beads.The overall size of these areas of reduced enhancement often decreased at days 30 and 90 compared to day 7, suggesting some interval recovery.These areas also corresponded to those seen in the pathological analysis composed of wide areas of hepatic necrosis.The 14, 30 and 90 day CT scans also show some dilated bile ducts in the embolized liver, most of which are in a peripheral location.These are best seen on scans with IV contrast and are often located adjacent to embolized arteries; most likely related to bile duct ischemia caused by embolization.This also correlates well with the observed presence of bile duct hyperplasia seen in some of the pathological analysis of these samples.This is an expected finding as the blood supply to the bile ducts is solely derived from branches of the hepatic artery.The surrounding hepatocytes, on the other hand, derive their blood supply from both the hepatic artery and portal vein and, therefore are relatively protected from arterial embolization.Although no off-target embolization to organs adjacent to the liver was seen in any of the animals, one animal failed to reach its scheduled sacrifice time point at 91 days since it did not recover from the anesthesia following the 30 day CT scan.Embolization of the phrenic artery was clearly identified by the presence of RO Beads in this artery on the post-embolization CT scans.It is possible that the hepatic ischemia/necrosis in the left lobe in this animal) and the non-target embolization of the phrenic artery both contributed to difficulty in recovery from anesthesia and early expiration of this animal.Macroscopic observations in the embolized portion of the liver included the appearance of multiple focal tan regions.These macroscopic observations were associated with microscopic findings of hepatic necrosis, fibrosis, and leukocyte infiltrates anticipated following embolization of liver arteries with microparticles.In one animal, abdominal cavity adhesions and swollen/thickened gallbladder were associated with microscopic findings of gallbladder necrosis, oedema, fibrosis, and neutrophil infiltrate due to injection of RO Beads.Review of the imaging confirmed that the artery supplying the 
gall bladder was included in the target region embolized in this animal.Therefore, the macroscopic and histological changes observed in the gall bladder were related to and expected from embolization of this artery.No bead-associated tissue changes were present in brain, common bile duct, heart, kidneys, lung, pancreas, spleen, duodenum, jejunum, ileum, and cecum of any of the animals.RO Beads were present within arteries of the treated liver lobe from all treated animals as expected.In animal number 1 and 3, RO Beads were present within blood vessels of the treated liver lobe and gallbladder).Lesions in the gallbladder wall of animal 3 were severe necrosis and oedema, moderate fibrosis, and mild neutrophilic infiltration, which was noted in the CT imaging analysis as abnormal gallbladder wall thickening.Focal coagulative necrosis of several liver lobules and mild increased interlobular fibrosis were present in the treated liver lobes).These changes are consistent with hepatic ischemia resulting from embolization and also consistent with the areas of decreased enhancement on the CT scans.In animal number 4 RO Beads were present within blood vessels of treated and left lateral liver lobes.Focal coagulative necrosis of several liver acini was present in the left lateral liver lobe.A zone of fibrosis and mixed leukocyte infiltrate was adjacent to the necrotic area).RO Beads were often observed clustered together within the larger arteries.The beads appear as dark blue objects due to the H&E staining and maintain a spherical morphology in the vessels, surrounded in a matrix of granulation tissue.Often the bead may appear misplaced out of the artery, or completely missing, leaving a hole where it was originally embedded; this is an artefact of the sectioning of the sample).Enlarged bile ducts are also visible, a hyperplasia associated with the ischemia induced by the embolization of the artery which provides the bile duct blood supply.This observation supports the presence of dilated bile ducts noted on the CT imaging shown in Fig. 
4.Occlusion of medium sized arteries was noted in animals 6 and 10.Affected arteries had lodged beads surrounded by mesenchymal/epithelial cells and deposition of mucin and connective tissue resulting in complete filling of the vessel lumen.The beads took on an interesting pattern of staining from the Pentachrome stain used to highlight the connective tissues, appearing like frogspawn with a clear outer layer).The endothelium of affected vessels was frequently absent and the beads were immediately adjacent to the elastic membrane.Microvascular networks were present within the newly formed intravascular connective tissue stroma in some sections.These tissue observations are consistent with a the classic foreign body response, with the arterial structure being remodelled with an initial invasion of inflammatory cells with these subsequently replaced by connective tissues with the beads being walled off by a fibrotic layer).On Day 91, RO Beads were present in remnants of blood vessels or what appeared to be hepatic parenchyma.In arteries, the lumens were replaced by mature-appearing collagen/connective tissue that occluded the vessel and encompassed the beads.Small arteries/veins were present in the new collagen/connective tissue.Individual beads were observed interspersed between hepatocytes.The beads were frequently near blood vessels but were surrounded by hepatocytes.There was no evidence of tissue injury or inflammatory response to the beads indicating that the material that the beads are composed of is highly biocompatible in nature.The objective of this study was to evaluate the safety, biocompatibility, imaging appearance and tissue effects of hepatic embolization with a novel radiopaque bead, in a swine model.The results of this study demonstrate that administration of up to 1 mL of sedimented LC Bead LUMI™ via the hepatic artery produced no effect on food consumption and the ability to gain body weight, electrocardiographic endpoints and urinalysis parameters.Transient changes were noted in haematology and coagulation values but these were considered to be related to the embolization-associated ischaemia.Embolization with RO Beads was also associated with minimal transient increases in the activities of AST and ALT on Day 2, but these had resolved by Day 7 as previously observed in this model .These findings are typical of clinical ischemic hepatocellular injury and were anticipated as a result of acute hypoxia induced in liver tissue by the embolization.LC Bead LUMI™ in this study is shown to be highly biocompatible, having passed all standard ISO10993 biocompatibility tests.Histopathological analysis of the device at 14, 30 and 90 days showed a classic foreign body response with initial inflammation, fibrosis and tissue remodelling to yield complete integration of the device into the tissue with no observed chronic inflammatory response.This study also demonstrates that this device performs its intended primary function of being able to embolize arteries, leading to targeted regions of ischemia and induction of focal areas of tissue necrosis located adjacent to the beads.Features such as dilated bile ducts or gallbladder embolization were expected as a consequence of the embolization procedure, were noted in the histopathology and correlated well with features observed during the CT imaging analysis.The lack of any loss of RO Bead image intensity over the 90 day period and the completely benign response to the tissue surrounding the beads suggests no loss of the triiodinated 
species over this time frame in vivo, confirming that the covalent attachment of the radiopaque moiety to the bead is stable in the body.A useful aspect of this novel device is its inherent radiopacity, allowing it to be visualized on X-ray angiography during the embolization procedure, enabling the user to observe filling of target arteries and also potentially off-target arteries if reflux occurs or the catheter is misplaced .It should be noted that it is currently recommended that LC Bead LUMI™ is administered as a suspension in pure iodinated liquid contrast medium in order to better suspend the beads and allow a sense of directionality and flow velocity when injecting very small aliquots of suspension.As both the beads and contrast agent are radiopaque by virtue of the presence of iodine in their structures, they cannot be distinguished from one-another with fluoroscopy as they leave and flow away at the microcatheter tip.Once trapped in the artery, beads become more distinguishable as the liquid contrast medium washes away.As small vessels well below 0.4 mm in diameter can be clearly seen under single shot resolution then the initiation of non-target embolization may be observed early on and action taken to avoid additional off-target vessel occlusion.In some cases this early signal may be sufficient to avoid significant damage but this will be highly dependent upon the criticality of the non-target vessels embolized.Indeed, in this study a contributory cause of the early expiration of one of the animals was off-target embolization of the phrenic artery, which even though could be clearly seen on the 30 day CT images, was not recognized at the time of the procedure.In addition to X-ray, the RO Beads are also clearly visible on CT, a finding shown on the 7, 14, 30 and 90 day CT scans obtained with or without IV contrast administration as the beads are more radiodense than the IV contrast and appear brighter.The beads are best seen without IV contrast, as they are easily visible in branches of the hepatic arteries without the confounding influence of soluble contrast in the tissue and the CT imaging appearance remains consistent over the 90 day study period.This will allow the physician to know the exact position of the beads on CT scans performed shortly after embolization and on long-term follow-up CT scans which along with MRI are routinely used in clinical practice to follow progress of liver cancer patients that have been treated with transarterial embolotherapy.It is hoped that the ability to clearly identify the location of the beads during and post-delivery will enable the interventional radiologist performing embolotherapy to better identify areas of under-treatment and be more wary of potential off-target occurrences.LC Bead LUMI™ therefore represents a true step forward in the improvement of minimally-invasive image-guided transarterial embolization.
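As flagged above, the delivered sedimented bead volume can be related to the injected suspension volume by simple arithmetic, given the 1:10 dilution of sedimented beads in contrast medium described for this study. The short Python sketch below performs that conversion; it is an illustration only, takes the dilution to mean one part sedimented beads in ten parts of final suspension, and ignores catheter dead volume and any beads lost to reflux (all of which are assumptions, not study data).

```python
# Illustrative arithmetic only: sedimented bead volume delivered from a
# 1:10 bead-in-contrast suspension, as described for this study.
# Catheter dead volume and reflux losses are ignored (assumption).

DILUTION_FACTOR = 10.0  # 1 part sedimented beads per 10 parts of suspension (assumed reading)

def sedimented_bead_volume(suspension_ml: float) -> float:
    """Approximate sedimented bead volume (mL) in an injected suspension."""
    return suspension_ml / DILUTION_FACTOR

if __name__ == "__main__":
    for injected in (2.2, 6.0, 10.0):   # suspension volumes reported in the study
        print(f"{injected:4.1f} mL suspension -> "
              f"~{sedimented_bead_volume(injected):.2f} mL sedimented beads")
```

With these assumptions, the 2.2–10 mL suspension volumes used here correspond to roughly 0.22–1 mL of sedimented beads, in line with the range quoted in the study summary.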
The objective of this study was to undertake a comprehensive long-term biocompatibility and imaging assessment of a new intrinsically radiopaque bead (LC Bead LUMI™) for use in transarterial embolization. The sterilized device and its extracts were subjected to the raft of ISO10993 biocompatibility tests that demonstrated safety with respect to cytotoxicity, mutagenicity, blood contact, irritation, sensitization, systemic toxicity and tissue reaction. Intra-arterial administration was performed in a swine model of hepatic arterial embolization in which 0.22–1 mL of sedimented bead volume was administered to the targeted lobe(s) of the liver. The beads could be visualized during the embolization procedure with fluoroscopy, DSA and single X-ray snapshot imaging modalities. CT imaging was performed before and 1 h after embolization and then again at 7, 14, 30 and 90 days. LC Bead LUMI™ could be clearly visualized in the hepatic arteries with or without administration of IV contrast and appeared more dense than soluble contrast agent. The CT density of the beads did not deteriorate during the 90 day evaluation period. The beads embolized predictably and effectively, resulting in areas devoid of contrast enhancement on CT imaging suggesting ischaemia-induced necrosis nearby the sites of occlusion. Instances of off target embolization were easily detected on imaging and confirmed pathologically. Histopathology revealed a classic foreign body response at 14 days, which resolved over time leading to fibrosis and eventual integration of the beads into the tissue, demonstrating excellent long-term tissue compatibility.
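The CT review described in this study relied in part on maximum intensity projection (MIP) reconstructions and on the high attenuation of bead-filled arteries relative to surrounding tissue. The generic NumPy sketch below shows how a MIP and a crude high-attenuation mask can be computed from a CT volume expressed in Hounsfield units; the array layout, projection axis and threshold value are illustrative assumptions and do not reproduce the study's actual image-analysis workflow.

```python
# Generic illustration (not the study's analysis pipeline): a maximum
# intensity projection and a crude high-attenuation mask from a CT volume
# stored as a 3-D array of Hounsfield units, assumed ordered (z, y, x).
import numpy as np

def mip(ct_hu: np.ndarray, axis: int = 1) -> np.ndarray:
    """Maximum intensity projection along one axis of the HU volume."""
    return ct_hu.max(axis=axis)

def high_attenuation_mask(ct_hu: np.ndarray, threshold_hu: float = 1000.0) -> np.ndarray:
    """Voxels brighter than an assumed threshold, e.g. bead-filled arteries."""
    return ct_hu > threshold_hu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_ct = rng.normal(50.0, 20.0, size=(64, 64, 64))  # placeholder soft-tissue HU values
    fake_ct[30:34, 30:34, 10:50] = 2000.0                # mock high-attenuation vessel
    projection = mip(fake_ct)
    mask = high_attenuation_mask(fake_ct)
    print("MIP shape:", projection.shape,
          "| high-attenuation voxels:", int(mask.sum()))
```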
482
Data on recurrent somatic embryogenesis and in vitro micropropagation of Cnidium officinale Makino
Nodal explants were excised aseptically from an in vitro grown plant of Cnidium officinale Makino and cultured on MS medium containing 0.5 mg L−1 BA and different concentrations of 2,4-D. An embryogenic callus was observed after 2 weeks of culture on MS medium containing 0.5 mg L−1 BA and 1.5 mg L−1 2,4-D. The obtained embryogenic callus was subcultured on the respective medium after 4 weeks and somatic embryos were observed under the stereomicroscope. Somatic embryos (SE) at the heart-shape and cotyledonary stages were photographed. The data show that cultures failed to produce embryos on MS medium containing the lower concentrations of 2,4-D in combination with 0.5 mg L−1 BA. The individual embryos at the cotyledonary stage were transferred to containers filled with 30 mL MS0 medium (MS medium without plant growth regulators) and 100% conversion to complete plants was detected. The obtained plants proliferated on MS0 medium in a similar fashion to the mother plant. In vitro grown C. officinale Makino was used as the source of nodal explants. These in vitro plants were maintained in the Horticulture laboratory at the Institute of Agriculture Science, Gyeongsang National University, Republic of Korea. The Murashige and Skoog (MS) medium without plant growth regulators (PGRs) was used for maintenance of the in vitro plants in a plant growth chamber set at 24 °C/18 °C, with a 16-h photoperiod provided by LED lights and 70% relative humidity (RH). The MS medium was prepared according to the method of Sharif-Hossain et al. Briefly, MS salts were weighed and dissolved in 1000 ml distilled water. Right after mixing, 30 g sucrose was added, followed by the PGRs, and the solution was left for 10 min on a magnetic stirrer. The medium pH was adjusted to 5.7 with 1 N HCl or 1 M NaOH, and finally 8 g tissue culture grade agar was added as a gelling agent. The prepared MS medium was sterilized by autoclaving at 121 °C and 15 psi for 20 min. The autoclaved medium containing MS salts, 2,4-D and BA was termed the induction medium, while the medium without PGRs was termed the germination medium (the amounts of each component needed for an arbitrary batch volume are illustrated in the short sketch below). The sterilized MS medium was poured into petri dishes inside a laminar flow hood and stored in the dark for future use. C. officinale Makino shoot nodes were used as explants for culture initiation. The explant excised from the in vitro grown plant was cultured horizontally on the induction medium. Each treatment comprised five petri plates and each plate contained five nodal explants. All cultures were kept in a plant growth chamber at 24 °C/18 °C, with a 16-h photoperiod provided by LED lights and 70% RH. After two weeks of incubation, callus formation was observed, and the callus produced somatic embryos upon subculture on the respective media. PGR-free MS medium was used as a control. The cotyledonary-stage somatic embryos were isolated and transferred to the germination medium, while the remaining embryogenic callus masses were subcultured on MS medium containing 1.5 mg L−1 2,4-D and 0.5 mg L−1 BA for recurrent somatic embryogenesis. The number of somatic embryos was counted after every 4 weeks of subculture. The isolated, well-developed cotyledonary SE were transferred into containers containing 50 ml of solid MS medium without PGRs. Five SE were placed vertically in each container and subcultured onto fresh MS0 medium for another 4 weeks. The percentage of surviving plants was counted.
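As noted above, the following minimal Python sketch scales the induction-medium recipe (per litre: 30 g sucrose, 8 g tissue culture grade agar, 1.5 mg 2,4-D and 0.5 mg BA, in MS salts) to an arbitrary batch volume. It is provided purely as a convenience for illustration; the 1 mg/mL PGR stock concentration and the function names are assumptions and are not stated in the protocol.

```python
# Minimal sketch: scaling the induction-medium recipe described above to an
# arbitrary batch volume. Per litre: 30 g sucrose, 8 g agar, 1.5 mg 2,4-D,
# 0.5 mg BA. The 1 mg/mL PGR stock concentration is an assumption for
# illustration, not a value stated in the protocol.

RECIPE_PER_LITRE = {
    "sucrose_g": 30.0,
    "agar_g": 8.0,
    "2,4-D_mg": 1.5,
    "BA_mg": 0.5,
}

def induction_medium_amounts(volume_ml: float, pgr_stock_mg_per_ml: float = 1.0) -> dict:
    """Amounts needed to prepare volume_ml of induction medium."""
    litres = volume_ml / 1000.0
    amounts = {name: qty * litres for name, qty in RECIPE_PER_LITRE.items()}
    # Convert PGR masses to stock-solution volumes (assumed 1 mg/mL stocks).
    amounts["2,4-D_stock_ml"] = amounts["2,4-D_mg"] / pgr_stock_mg_per_ml
    amounts["BA_stock_ml"] = amounts["BA_mg"] / pgr_stock_mg_per_ml
    return amounts

if __name__ == "__main__":
    for key, value in induction_medium_amounts(500.0).items():  # example: a 500 mL batch
        print(f"{key:>15}: {value:.3f}")
```

For a 500 mL batch, this gives 15 g sucrose, 4 g agar, 0.75 mg 2,4-D and 0.25 mg BA (i.e. 0.75 mL and 0.25 mL of the assumed 1 mg/mL stocks).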
Cnidium officinale Makino, a perennial herb of the family Umbelliferae, is a well-known medicinal plant in oriental medicine with antidiabetic, anti-tumor metastatic, antiplatelet, antimicrobial and insecticidal properties. Since C. officinale does not produce seed, plant tissue culture is a viable alternative for its propagation. Node explants from in vitro grown C. officinale Makino were cultured on MS medium supplemented with plant growth regulators (PGRs) such as 2,4-Dichlorophenoxyacetic acid (2,4-D) and/or 6-Benzylaminopurine (BA). The aim was to investigate the optimal concentration and combination of 2,4-D and BA for somatic embryogenesis in node explants of C. officinale Makino. Embryogenic callus was induced on the node explants after four weeks on MS medium containing 1.5 mg L−1 2,4-D and 0.5 mg L−1 BA. The translucent white, embryogenic callus was subcultured on the respective medium and individual, well-structured somatic embryos were observed. Heart- and cotyledon-stage embryos were photographed under a stereomicroscope. The individual somatic embryos (SE) were transferred to MS medium without PGRs (MS0) and 100% germination was observed. Repeated subculturing of the embryogenic callus for five months resulted in recurrent somatic embryogenesis, but with a gradual decline in embryo number.
483
Guided tissue engineering for healing of cancellous and cortical bone using a combination of biomaterial based scaffolding and local bone active molecule delivery
Bone tissue engineering involves an interplay of cells, biomaterials, bone active proteins and drugs to regenerate viable bone tissue .Different cell types and healing stages orchestrate bone regeneration .Biomaterials provide initial scaffolding in a bone void, which may or may not be suitable for load bearing, depending on their inherent mechanical properties .These scaffolds also provide a template for cells to migrate onto and start the repair process.Despite large progress in scaffold development, the osteoinductivity is limited.Bone active proteins and drugs are required to provide cells with sufficient stimulus to regenerate large volumes of bone in humans and for achieving performance on par with autografts.The approved osteoinductive proteins include bone morphogenic proteins-2 and 7.After a brief successful stint in clinical application, their usage has been debated due to sub-optimal carrier systems, the use of supraphysiological doses, rebound osteoclast activity with concomitant premature resorption of bone and harmful side effects.An increase in incidence of cancer in patients treated with rhBMP-2 was reported , although a recent study analyzing a large patient population treated with rhBMP-2 has shown contradictory findings .Attempts have been made to prevent some of the side effects using bisphosphonates or anti-RANKL antibodies, focusing on silencing the osteoclast activity .We recently demonstrated that a higher volume of mineralized tissue was induced by local co-delivery of BMP-2 and a bisphosphonate, zoledronic acid using a porous biomaterial, and also allowed us to reduce the minimal effective local rhBMP-2 dose .In recent years, metaphyseal bone defects have gained interest as a preclinical research focus especially due to bone cavities caused by resection of malignant tumors and infections.In a tibial metaphyseal defect in rats, we used a biphasic, microporous, slow release, calcium sulphate/hydroxyapatite biomaterial to locally deliver ZA alone or in combination with rhBMP-2 .While we could demonstrate significant cancellous bone regeneration using only ZA or by the combination of ZA and rhBMP-2, we somewhat surprisingly encountered delay in callus formation and bridging of the cortical defect.Several studies have indicated that scaffolds can be used to guide tissue regeneration, based on their shape, surface and chemical properties .We recently developed a macroporous pre-set composite biomaterial from crosslinked gelatin-CaS-HA and evaluated the in-vitro and in-vivo carrier properties .Furthermore, a collagen membrane scaffold that shares similar biochemical profiles to a previously reported collagen membrane was developed to guide cortical bone formation .In the present study, we aimed to evaluate whether the combination of these two materials delivering bone active molecules, could guide both cancellous and cortical bone regeneration in a metaphyseal defect model in rats.The specific aims were: 1) To evaluate if a CM can be used to deliver rhBMP-2 or a combination of rhBMP-2 and ZA in a previously described ectopic muscle pouch model and 2) To evaluate the potential of Gel-CaS-HA in locally delivering ZA and rhBMP-2 in the medullary compartment for guiding cancellous bone regeneration with or without an endosteal cover in the form of a CM delivering low dose rhBMP-2, to guide cortical regeneration in a circular tibial metaphyseal defect in rats.We hypothesized that the Gel-CaS-HA delivering ZA and ZA + rhBMP-2 in the medullary compartment was able to stimulate 
MSCs to induce cancellous bone regeneration.The CM covering the defect would prevent the Gel-CaS-HA material from protruding into the cortical defect and release low dose BMP-2 into the surrounding tissue and stimulate muscle progenitor/stem cells to induce cortical bone healing.rhBMP-2 from the Infuse® Bone Graft kit, ZA, pentobarbital sodium, diazepam ketamine hydrochloride, xylazine hydrochloride and buprenorphine were purchased from the pharmacy.Collagen Membrane was kindly provided by Ortho Cell Australia.Male Sprague-Dawley rats were purchased from Taconic.The study was divided in following sections based on the aims of the study: 1) Analyze the feasibility of a collagen membrane in delivering rhBMP-2 and ZA in an ectopic muscle pouch model, 2) Evaluate the carrier properties of a supermacroporous biomaterial in delivering locally rhBMP-2 and ZA in a metaphyseal bone defect with an aim to guide cancellous bone regeneration.3) Evaluate the potential of the collagen membrane delivering low dose rhBMP-2 to guide cortical bone defect healing in the same defect model.In this study we evaluated two biomaterials.The first biomaterial consists of a collagen membrane developed at the University of Western Australia .A detailed description of the fabrication process of the collagen membrane is provided elsewhere .Briefly, porcine connective tissue rich in collagen type 1 was cleaned of the fat, followed by denaturing the non-collagenous proteins using a mixture of 1% sodium dodecyl sulphate and 0.2% LiCl overnight at 4 °C.The resulting tissue was processed in 0.5% HCl solution for 30 min to denature the collagen, washed with deionized water and neutralized with 0.5% NaOH solution.This collagen matrix was then subjected to mechanical stretching to reach desired dimensions, structure and alignment of the fibers.The tissue was immersed in a solution of 1% HCl to ensure complete denaturation for 1 day.A dry CM was obtained by briefly treating it with acetone and air drying.At the end of the process, a CM with a thickness range of 200–400 μm was obtained.The structure of the CM was visualized using micro-CT imaging with contrast enhancement using 10% potassium iodine solution for 10 min.The CM was further characterized using scanning electron microscopy by sputter coating the CM with platinum .The membrane contains one rough side with randomly distributed collagen bundles that form a rough/porous structure, which enable cell attachment and settlement.The membrane further contains a smooth side consisting of aligned collagen bundles forming a knitted structure.The application of the CM in bone as well as its carrier properties in delivering rhBMP-2 and ZA has not been tested before.The second biomaterial is a supermacroporous cryogel consisting of crosslinked gelatin-CaS-HA prepared via cryogelation .Preparation, characterization and carrier properties of this biomaterial have been described elsewhere .Briefly, the material is fabricated by mixing the polymeric and inorganic components to form a slurry, after which a crosslinker is added and the polymerization occurs at sub-zero temperatures .After an incubation period of 12 h under cryo conditions, the crosslinked matrix is thawed at room temperature followed by re-freezing and freeze-drying.This process produces a spongy scaffold with a porous structure ranging from a few microns to approximately 100 μm, as shown earlier with SEM .The delivery of rhBMP-2 and ZA via the Gel-CaS-HA scaffold has been described earlier in an extra-osseous, muscle 
pouch model but local delivery of bone active molecules in a bone defect has not been performed to date.Pre-sterilized circular pieces of CM were cut with a biopsy punch.The animals were divided into following groups: 1.CM with saline, 2.CM containing 10 μg rhBMP-2) and 3.CM containing 10 μg rhBMP-2 and 10 μg ZA + ZA.In the CM + rhBMP-2 group, a total of 60 μg rhBMP-2 was reconstituted in 75 μL saline to a concentration of 0.8 mg/mL.From the stock solution, 12.5 μL of the saline containing 10 μg rhBMP-2 was pipetted on each piece of the membrane.In group 3 with rhBMP-2 and ZA, a total of 60 μg rhBMP-2 was solubilized in 75 μL of ZA to a concentration of 0.8 mg/mL.A total of 12.5 μL of this solution containing 10 μg rhBMP-2 and 10 μg ZA was pipetted on each membrane.Scaffolds were incubated with the rhBMP-2 and rhBMP-2+ZA solution for at least 30 min at room temperature to allow for homogenous soaking of the material prior to implantation.Samples belonging to the only CM group were incubated with 12.5 μL of saline.The volume of the liquid pipetted on the membranes in all groups was just enough to cover the membranes without overflowing.Ten male Sprague-Dawley rats with average weight of 351 ± 9 g were used.Animals were anaesthetized using a cocktail of pentobarbital sodium and diazepam administered via the intra peritoneal route.A midline abdominal incision approximately 1.5 cm long was made and a muscle pocket in the rectus abdominis on each side of the midline separated by a minimum distance of 1.5 cm was created using scalpels.Five animals received CM alone in the left pocket and CM + rhBMP-2 in the right pocket.Likewise, five more animals were implanted with CM alone in the left pocket and CM + rhBMP-2 + ZA in the right pocket.All membranes were implanted in a flat fashion within the muscle pouch.The muscle and skin wounds were closed using non-resorbable sutures and animals had free access to food pellets and water immediately post-operation through the duration of the experiment.Animal sacrifice was performed using CO2 asphyxiation 4-weeks post implantation.Harvested specimens were cleaned of surrounding muscle tissue, wrapped in saline soaked gauze and stored in 5 mL Eppendorf tubes followed by radiography and micro-CT imaging on the same day.Gel-CaS-HA scaffolds were cut into cylinders measuring 4 mm in diameter and 3 mm in height.This was done by cutting the scaffold into cylinders of 3 mm height using a sterile surgical blade following which a biopsy punch with a diameter of 4 mm was used to obtain a cylinder with 4 mm diameter and 3 mm height.Scaffold sterilization was performed by incubating the scaffolds in 70% EtOH overnight followed by two quick changes of 99.5% EtOH for 20 min each.Scaffolds were then air dried in a laminar hood.Pre-sterilized circular pieces of CM measuring 6 mm in diameter were cut using a biopsy punch.Immobilization of the Gel-CaS-HA and CM materials with bone active molecules was performed as per the dosages specified in Table 1.In G1, the defect was left untreated.In G2, the Gel-CaS-HA scaffold was incubated with 20 μL saline for at least 30 min before implantation.In G3, 100 μg ZA contained in 125 μL saline was mixed with 75 μL saline and 20 μL of this solution containing 10 μg ZA was pipetted on each of the Gel-CaS-HA scaffolds.In G4, 50 μg rhBMP-2 was reconstituted in 125 μL of ZA solution containing 100 μg ZA following which 75 μL saline was added to the solution.Using this solution, 20 μL of the mixture containing 5 μg rhBMP-2 and 10 μg ZA were 
pipetted on each scaffold.In groups G5 and G6, similar procedure followed during preparation of groups G3 and G4 were repeated, respectively with the only difference that a CM incubated with 10 μL saline was applied as an endosteal cover.In G7, the Gel-Cas-HA scaffold was prepared by following the same steps described during the preparation of scaffolds in G3.Additionally, the CM pieces were incubated with rhBMP-2 solution.This solution was prepared by solubilizing 20 μg rhBMP-2 in 100 μL saline.Each CM was incubated with 10 μL of this solution containing 2 μg rhBMP-2.In G8, 30 μg rhBMP-2 was suspended in 125 μL ZA solution and further diluted with 75 μL saline.From this solution, 20 μL containing 3 μg rhBMP-2 and 10 μg ZA were pipetted on each Gel-CaS-HA scaffold.To combine the CM with rhBMP-2 in G8, the same steps described for functionalizing the CM in G7 were followed.No overflow of the bioactive molecule containing solution occurred during the immobilization process.After the addition of bioactive molecules, both Gel-CaS-HA and CM were incubated with the additives at room temperature for at least 30 min before implantation.Total number of experimental groups with sample size/group is also mentioned in Table 1.A total of 82 male Sprague-Dawley rats with an average weight of 510 ± 16 g were used for the tibia defect model.Animals were anaesthetized using a cocktail of ketamine and xylazine by intra peritoneal administration.The surgical procedure was similar to what has already been described by Horstmann et al. .Briefly, the right knee was shaved and sterilized using chlorohexidine EtOH.A skin incision measuring approximately 1 cm was made medially at the proximal tibia starting at the knee joint.Small layer of muscle was scraped using scalpels after which the periosteum was rigorously scraped in both proximal and distal directions to expose the flat surface of the tibia.Drilling was performed near the insert of the medial collateral ligament.Using a handheld drilling burr, the cortical bone and the underlying cancellous bone was drilled until the posterior cortex was reached.This gave a circular cortical defect of 4.5 mm in diameter extending 3 mm downwards into the cancellous bone.The wound was cleaned using sterile gauze and either left empty or filled with Gel-CaS-HA based on the groups described in Table 1 and the whole experiment is schematically presented in Fig. 
2.The diameter of the Gel-CaS-HA was intentionally kept smaller than the defect diameter in order to allow for the swelling of the biomaterial and to ensure the biomaterial would fit in the defect without any micro-structural damage that would affect the porous structure of the material.In the groups involving CM, once the cancellous defect was filled with Gel-CaS-HA and bioactive molecules, a pre-cut piece of CM with or without rhBMP-2 was placed on the defect with smooth side facing the Gel-CaS-HA in the marrow cavity and rough side facing the muscle.The CM was carefully packed under the endosteum in a circular fashion using a blunt, flat end elevator ensuring that the Gel-CaS-HA did not protrude outwards into the cortical defect.At this stage, the muscle wound was closed using a single resorbable suture and the skin incision was closed using single mattress sutures.After 8-weeks of healing time, the animals were sacrificed using CO2 asphyxiation.The radiographs of the CM specimens harvested from the muscle pouch after 4-weeks of implantation were obtained using the scout view of a micro-CT scanner.Subsequently, samples were scanned in the same micro-CT instrument.Images were reconstructed using a RAMLAK filter with 100% cut-off.In the tibia defect model, the right proximal tibiae were harvested at 8-weeks post defect creation and subjected to micro-CT imaging with the same settings as above, but with 480 projections.The images were analyzed to quantify the extent of bone formation in the defects as described below.DICOM images were converted to bitmap images using imageJ followed by importing the images to CTAn.Due to the ectopic location of the CM in the muscle pouch, the entire volume of the harvested specimen above a predefined grayscale threshold was considered as newly mineralized tissue for quantification.The threshold was defined based on visual inspection.Bone volume was used as an outcome variable.Images from all samples were aligned in Dataviewer and imported to CTAn for further analysis.Three separate regions of interest were defined for the analysis of new bone formation in the tibia defect model.ROI1 consisted of a conically shaped ROI in coronal view, starting with 4.5 mm diameter at the bottom of the old cortex, extending down into the defect for 2 mm with the smallest diameter at the bottom being 1.5 mm.Due to the triangular anatomy of the tibia, ROI 1 was dynamic with varying diameters at the top and bottom to avoid including old cortical bone in the analysis and only study bone formed within the defect.ROI2, was a 4.5 mm diameter circle starting from the bottom of the old cortex and extending outwards to quantify the cortical bone/callus formed in the cortical defect.The height of ROI2 was variable to include all newly mineralized tissue formed in the cortical/callus region.ROI3 included the full extent of bone formation proximal and distal to the implanted scaffold.Besides, ROI3 included newly formed trabecular and cortical bone as well as the original bone.A square shaped ROI measuring 8 mm × 8 mm was drawn in the trans-axial view and the height of the ROI was chosen to be 3.25 mm proximal and 3.25 mm distal from the middle of the defect.Thresholding was set at 100–255 in all images, as determined by visual inspection.Bone volume/Tissue volume was used as an outcome variable in ROI1, whereas Bone volume was used as an outcome variable for ROI2 and ROI3 due to variable tissue volume.An orthopedic surgeon assessed healing of the cortical defect and defects were 
classified as bridged or not bridged .Assessment was performed based on the micro-CT images from approximately the middle of the defect in the sagittal and trans-axial view where the defect width was largest.Muscle pouch specimens and the tibia specimens were fixed in a pH-neutral 4% formaldehyde solution for 24 h. Following fixation, samples were placed in a 10% Ethylenediaminetetraacetic acid solution buffered to pH 7.3–7.4 for 2-weeks for the muscle pouch samples and 5 weeks for the tibia defect samples with regular replenishments of EDTA solution every 3rd day.Some samples from the tibia defect study were not decalcified by 5-weeks, due to which, a second decalcification step was added wherein the EDTA-solution was removed, samples washed in deionized water and then placed in a 5% formic acid solution for 24 h.Once the decalcification was deemed complete by physically testing the samples, specimens were washed for 24 h in deionized water.Dehydration of the samples was performed using an increasing EtOH gradient ranging between 70 and 99%, followed by xylene treatment.Finally, samples were embedded in paraffin using routine procedures.Paraffin-embedded tissue samples were sectioned to 5 μm thickness using a microtome.Sections were collected on super frost microscope slides, allowed to attach to glass slides on a slide warmer for at least 1 h followed by incubation at room temperature for at least 24 h. Sections were then deparaffinized and rehydrated using standard procedures with a decreasing EtOH gradient, followed by staining with hematoxylin and eosin, dehydrated again, cleared in xylene and mounted."Additionally, in the tibia defect study, collagen matrix staining using Picrosirius red was performed per the manufacturer's protocol.Power calculations for estimation of group sizes were based on our previous studies, which included comparisons between multiple treatment groups .Data is presented as mean ± standard deviation unless otherwise stated.Micro-CT data from the tibia defect model were first evaluated for normality using Shapiro-Wilk test on the residuals and the distribution of the residuals."Normally distributed data were tested using ANOVA with Tukey post-hoc or Games-Howell post-hoc test based on the homogeneity of variances .When data were not normally distributed, Kruskal-Wallis multi sample test was used.Micro-CT data from the abdominal muscle pouch model comparing only two groups were tested using Mann-Whitney U test, where a non-parametric test was chosen due to the small sample size.Both the abdominal muscle pouch model and the tibia defect model were approved by the Swedish board of agriculture.Animals had free access to regular food pellets and water throughout the duration of the experiments.Animals were housed two/cage with 12 h light/darkness cycles.This experiment evaluated if a CM can be used to deliver rhBMP-2 and ZA and subsequently lead to bone formation in an ectopic muscle pouch model.Radiographs taken at the time of micro-CT scanning showed no radiolucency in the CM alone group.Identifying them in the muscle pouch after 4-weeks was also difficult.CM + rhBMP-2 group showed scattered radiolucency in different regions of the material and prominent radiolucency towards the edges.CM + rhBMP-2+ZA group exhibited highest radiolucency throughout the volume of the specimens.Samples that exhibited radiographic bone formation were further scanned using a micro-CT scanner and CM + rhBMP-2+ZA group produced significantly higher bone volume compared to the specimens 
in the CM + rhBMP-2 group.Histologically, CM alone did not show any bone formation and the scaffold was infiltrated with fibrous-like tissue at the extremities.In the CM + rhBMP-2 group, prominent bone formation was observed on the edges.The specimens were predominantly filled with marrow like tissue and small amounts of bone towards the middle of the specimen.Co-delivery of rhBMP-2 and ZA with the CM exhibited significant bone formation both at the sides as well as towards the middle of the specimens.Remnants of the CM were observed in all tested groups.The three groups were used to test the ability of Gel-CaS-HA alone or immobilized with ZA or rhBMP-2+ ZA in regenerating cancellous bone in a tibia defect model and compare it with the empty defect.In the defect ROI, all Gel-CaS-HA scaffold treated groups, irrespective of the addition of ZA or rhBMP-2+ZA, showed significantly higher BV/TV when compared to the empty group wherein the defect was drilled and left empty to heal.In the cortical ROI i.e. new bone formation in the cortical defect, groups G1-G4, G3 and G4 exhibited significantly higher BV compared to the empty group and G4 also had significantly higher BV compared to G2.In ROI3 i.e. full 6.5 mm bone ROI, BV was significantly higher in the scaffold groups immobilized with ZA and rhBMP-2+ZA when compared to G1.G4 also had significantly higher BV than scaffold alone.No significant differences were seen between G1 and G2 as well as between G3 and G2.Representative images of the extent of cortical healing from each group is shown in Fig. 6.In the empty group, all cortices healed with a thin neo-cortex.Addition of Gel-CaS-HA in the medullary compartment impaired cortical healing and the scaffolds were protruding outwards into the cortical ends in several specimens.In the empty group, no signs of new bone were seen in the medullary cavity.The defect was filled with marrow like tissue.On the cortical side, a thin neo-cortex was formed, which covered the entire width of the defect.In G2, new cancellous bone was seen in close proximity to the scaffold.Bone was also observed within the pores of the scaffold.Some new cortical bone covered the cortical defect, however a large portion of the cortical defect was not healed and was filled with fibrous tissue like structures.Remnants of the scaffold in the cancellous defect were also visible.In G3, the medullary compartment was filled with large amounts of trabecular bone especially around the scaffold.Most of the new bone was seen in close proximity to the scaffold edges regenerating outwards from the scaffold.Large parts of the scaffold remained un-resorbed.Similar to G2, the cortical defect in G3 also remained unbridged with only some cortical regeneration.Similar to G3, the extent of cancellous bone regeneration in the medullary compartment was higher in G4 compared to G1 and G2.The marrow cavity was filled with new cancellous bone proximal and distal to the scaffold and remnants of scaffold were visible.Incomplete bridging of the cortex was seen in the representative histological images.In terms of collagen matrix deposition, the empty control group showed no collagen deposition within the defect or regions surrounding the defect but the cortical defect was bridged and rich in collagen.G2 exhibited slight amounts of collagen deposited in and around the scaffold.Both groups G3 and G4 had abundant collagen deposition in the regions that exhibited new bone formation in Fig. 
7.All gel-CaS-HA groups demonstrated impaired cortical healing.Groups G5-G8 were used to evaluate whether a CM alone or immobilized with low dose rhBMP-2 could aid in restoring cortical bone regeneration, which was not seen in G2-G4.In the defect, the BV/TV was significantly higher in all Gel-CaS-HA and CM treated groups compared to the empty group.No differences between G2-G8 were seen.In the cortical ROI, the BV was significantly higher in G6 and G7 when compared to G1 and G2.In ROI3, the BV fraction was significantly higher in groups G5-G8 when compared to the empty group.G5, G6 and G8 also had significantly higher BV when compared to G2.Fig. 9, shows representative micro-CT slices from G5-G8 showing the extent of cortical healing in each group and the total number of cortices healed/treatment group.It was evident that in the groups where rhBMP-2 was added on the CM, a greater number of cortices healed with a prominent bony callus bulging outwards compared to the empty control and the rest of the groups treated with only Gel-CaS-HA.Addition of only CM to the ZA treated Gel-CaS-HA appeared to have a radio dense mineral precipitation in the region where the membrane was originally placed but cortical bridging was not achieved in that group.In groups G6-G8, a dual response was seen on the cortex with radio dense mineral precipitation in the areas where the CM was originally applied as well as an outer cortical shell, which bridged the entire defect in several cases.The cancellous bone regeneration in G5-G8 treated groups was similar to G3 and G4.Large amount of new trabecular bone was seen growing around the scaffold.Large parts of the scaffold remained un-resorbed.In G5 and G6, the cortical defect was not completely bridged but cortical regeneration in close proximity to the damaged cortical ends was seen creeping towards the center of the defect.In the representative images in Fig. 10, G7 and G8 showed completely bridged cortical defects as indicated by the yellow # symbols.In all CM treated groups i.e. 
G5-G8, remnants of the CM covering the defect were not seen indicating a possible resorption of the membrane after 8-weeks.All CM treated groups with Gel-CaS-HA and ZA or Gel-CaS-HA and ZA + rhBMP-2 indicated abundant collagen deposition in the cancellous defect, predominantly in the outer regions of the scaffold and its surroundings.Complete cortical bridging with uniform picrosirius red staining in the cortical region of representative images from groups G7 and G8 was observed.Experimentally, rhBMP-2 till date has remained as the most potent osteoinductive molecule capable of inducing bone formation in various anatomical sites including non-osseous sites like in the muscle .It has never proven superior to autograft in randomized clinical studies regarding the rate of healing.The only FDA approved device for the delivery of rhBMP-2 today is the Medtronic® absorbable collagen sponge.A comparison of the in-vivo release kinetics of rhBMP-2 from the ACS with other recently developed biomaterials including Gel-CaS-HA, showed more sustained release of the protein by the latter .With improved biomaterials as carriers, we might be able to show an improved effect of BMP-2 over autografts, reducing the genuine shortage of autografts available at surgery .This is paving way for the desired off-the-shelf tools in the form of biomaterial carriers that can efficiently deliver rhBMP-2.While one of the reasons for the underperformance of rhBMP-2 can be attributed to its carrier, also the supraphysiological doses used clinically induce osteoclast formation and thereby premature bone resorption leading to an overall reduced net bone formation.Several studies have shown that by combining BMP with systemic or local ZA treatment , it is possible to hinder the excessive osteoclast activity thereby maintaining an increased net bone turnover.In a previous study using the Gel-CaS-HA in an abdominal muscle pouch model, we have shown that co-delivery of rhBMP-2 and ZA via the Gel-CaS-HA can reduced the effective rhBMP-2 doses by 4 times.TRAP staining confirmed the hypothesis and indicated reduction in the osteoclast associated TRAP activity in the rhBMP-2+ZA group .This study used two distinct biomaterials delivering bone active molecules to guide cancellous and cortical bone regeneration.The doses for rhBMP-2 and ZA in the abdominal muscle pouch model and the tibia defect model in this study were taken from previously published studies by our group .These are in line with other reports in experimental bone healing models in rats, including the abdominal muscle pouch model and a critical sized femoral diaphysis defect model with locally delivered rhBMP-2.Zara et al. 
reported local rhBMP-2 doses of 22.5 μg/animal or higher led to local inflammatory reaction and cyst formation in a femoral defect model .Regarding local ZA delivery, Perdikouri and co-workers recently used a femoral condyle defect in rats and treated it with increasing concentration of local ZA delivered via a biphasic CaS-HA biomaterial .They reported a reduction in bone mineral density in the defect area with an increase in local ZA doses.Belfrage et al., reported similar findings in a bone chamber model in rats .These results suggest that local ZA delivery at higher doses can have a negative impact on bone formation.Whether it is possible to further reduce the doses of rhBMP-2 and ZA is only speculative at the moment.Taken together, the literature suggests that too high local doses of rhBMP-2 and ZA can potentially have negative effects on bone formation and lowering the doses further should be pursued.The experimental groups in the abdominal muscle pouch model were based on previous studies and CM + ZA group was not included.Earlier data showed that local delivery of ZA in an ectopic model of bone formation in rats did not induce bone.In the tibia defect model, a group with Gel-CaS-HA + rhBMP-2 alone was not included since rhBMP-2 induces osteoclastogenesis and always leads to less bone formation compared to rhBMP-2+ZA .Although, results from the muscle pouch study indicated that CM + rhBMP-2+ZA regenerates significantly higher amount of bone compared to the CM + rhBMP-2 group, CM + rhBMP-2 groups were chosen instead for the tibia defect study.The tibia defect model used in this study was a follow up of an earlier study wherein it was noted that cortical healing was impaired when the cancellous cavity was filled with a CaS/HA biomaterial with bioactive molecules .Based on the results from that study, it was also noted that out of all treatment groups, CaS/HA + ZA was the only group with no specimen showing complete cortical bridging.It was hypothesized that ZA might not have a similar anabolic effect on cortical bone as compared to the cancellous bone and thus only CM + rhBMP-2 groups were used for cortical bone regeneration in G7 and G8.In the first part of the study, we used an established abdominal muscle pouch model to perform a feasibility analysis of the collagen membrane and its carrier properties in-vivo.The ectopic muscle model is an efficient model to study the osteoinductive properties of a carrier biomaterial due to its extraosseous location .The muscle pouch results indicated that the combination of CM with rhBMP-2+ZA regenerated significantly higher bone volume when compared to CM + rhBMP-2 group without a bisphosphonate.This could be attributed to the excessive osteoclastogenesis by rhBMP-2 .As expected, CM alone did not show any radiographic signs of bone formation at all.Osteoinduction of biomaterials has been shown to occur in other large animal models without bioactive molecules but chemical signals from calcium phosphates either embedded in the biomaterials or precipitated on the biomaterial surface in-vivo appears to be necessary along with other physico-chemical properties like porous structure and surface properties .Most previous animal models used to study fracture healing are diaphyseal models dealing with non-unions or critical defects.However, metaphyseal bone regeneration differs from diaphyseal.Focusing on other indications like subchondral fractures, aseptic prosthetic loosening and bone loss after debridement of infections or tumors has led to an 
increased interest in metaphyseal bone regeneration as a different entity to diaphyseal.The metaphyseal bone is highly vascularized and contains a rich stem cell source.The tibia defect model described in this study has previously been used to evaluate the bone forming potential of a ceramic biomaterial consisting of CaS/HA .We have earlier reported that local delivery of ZA and ZA + rhBMP-2 using a microporous CaS/HA ceramic biomaterial induced significantly higher volume of mineralized tissue in the defect when compared to empty group, allograft group and CaS/HA material alone .Though the defect dimensions in the earlier study and the present study are different, none of the defects created in the two studies appeared to be critical defects, since the majority of the cortices in the empty group healed in both studies.Almost no cancellous bone regeneration was seen in the empty group emphasizing the need of scaffolding for bone tissue regeneration.It should be noted that the cortical healing was impaired in both studies, irrespective of the type of biomaterial used or whether bioactive molecules were added.We speculate that the biomaterials protrude out through the cortical defect thereby hindering the damaged cortical edges from completely bridging within the time frame of the study.The BV/TV in the defect ROI in G2-G4 was significantly higher than the empty group indicating that the addition of ZA or rhBMP-2 and ZA thus does not have an effect on the ingrowth of bone into the scaffold.This could be because most of the new bone formed due to the addition of ZA, and rhBMP-2+ZA occurred outwards from the scaffolds, which is measured in ROI3.The effect of the addition of ZA and rhBMP-2+ZA was more prominent in ROI3 as seen both via micro-CT and representative histology images.The addition of ZA alone induced significant bone formation around the scaffold, indicating a possible anabolic role of ZA, which challenges the notion of ZA only being anti-catabolic .With impaired cortical healing using Gel-CaS-HA from the first part of the study, we sought to guide cortical tissue regeneration with the aid of a CM placed endosteally at the defect site.The hypothesis was to use the CM as a barrier to prevent the scaffold from protruding outwards into the cortical defect and simultaneously provide a template for stem cells to populate the cortical defect.The release of rhBMP-2 would further accelerate the differentiation process of stem cells into osteoblast like cells thereby bridging the cortical defect.In G5 and G6, the CM prevented the Gel-CaS-HA scaffolds from bulging out into the defect.In the area originally covered with the membrane, a radio dense mineral precipitate forming a white rim was seen in the micro-CT images.Since histology was done on decalcified specimens, it was not possible to characterize the type of mineral deposition, although it was evident from histology that the white rim seen in the micro-CT image was not cortical bone.We could not identify the collagen membrane after 8-weeks, which might imply that the membrane leads to mineral precipitation early on but gets resorbed at a later time.Addition of rhBMP-2 to the CM in G7 and G8 led to complete cortical bridging seen as a bulging callus covering the entire cortical defect in 50% and 70% of specimens, respectively.Callus formation and cortical bridging are important aspects of indirect bone healing necessary to achieve complete repair .Noteworthy is that while micro-CT results from the cortical ROI indicated some 
mineralization in the membrane only groups, these groups did not score equally high as G8 in the visual assessment of cortical healing.This is most likely because the grayscale values in the micro-CT images are too similar to differentiate between low-density mineral precipitations on the CM from viable cortical bone.Furthermore, the large spread of data in G8 could also have contributed to these results.Significant radiological and histological bone formation was noticed in all bioactive molecule treated groups.The Gel-CaS-HA biomaterial provides a spatio-temporal release of rhBMP-2 and ZA in-vivo and releases approximately 65% of rhBMP-2 over a period of 4-weeks, thereby providing a continuous supply of the osteoinductive molecule during the initial weeks of bone repair .In terms of ZA delivery, the biomaterial released nearly 40% ZA on day 1 post implantation with almost no further release of ZA.rhBMP-2 delivery is necessary for osteogenic differentiation of progenitor cells and the constant presence of ZA is necessary for preserving the new bone formation.This is verified by significant bone formation in all bone active molecule treated groups.Bone was predominantly seen around the scaffold rather than in the middle of the defect, which was still covered by the unresorbed Gel-CaS-HA scaffold.Approximately 50% of the Gel-CaS-HA scaffold degraded after two months as seen from the in-vitro degradation experiment earlier .It can be inferred that due to the slow progression of bone into the pores of the scaffold, bone preferentially grew in the regions around the scaffold.Maybe the resorption rate of the Gel-CaS-HA scaffold could be modulated further to achieve even more cancellous bone regeneration within the original defect.It must however be noted that most biomaterials are pre-tested in rodent models of bone healing, which have a faster healing rate than humans , and a fast resorbing biomaterial in rodent models might not necessarily be optimal for human use.The cells responsible for healing of bone originate from various sources including the marrow canal, periosteum, skeletal muscle and blood vessels .The cancellous bone regeneration especially in G3-G8 i.e. rhBMP-2 and ZA treated groups, could possibly be due to the BMP based recruitment and stimulation of stem cells from the medullary canal including the endosteum also shown by Yu et al., earlier .ZA at low doses is also shown to induce osteogenic differentiation of mesenchymal stem cells .Taken together, this justifies the source of cells for cancellous bone regeneration in this study.Cortical healing is more complex and depends on several factors including mechanical stability of the bone and availability of healthy periosteum.Yu et al. reported that periosteal stem cells are a strong target for BMP-2 to form the fracture callus.Another study from Liu et al. 
suggested the role of myogenic cells from the Myo D lineage significantly contributes to fracture healing in an open fracture model with intentionally damaged periosteum .In the present study, we scraped the periosteum both proximal and distal to the defect following which a hole was drilled into the bone.Many cortices did heal in the groups where CM with rhBMP-2 was used, and we speculate that the collagen membrane acted as a physical barrier for stem cells from the marrow cavity to migrate to the membrane and guide cortical healing.This is because the membrane is non-permeable to cells at least during the early phase before being degraded.Furthermore, due to a physically damaged periosteum, we speculate that the cortical healing seen in G7 and G8, with rhBMP-2 delivered via the CM, is due to muscle derived stem cell population.Muscle cells possess BMP receptors and respond to BMP treatment .In a compartmental defect in mice, cortical healing was impaired without the presence of MSCs from the marrow compartment and the defect was instead replaced by scar tissue .This contradicts our findings and we speculate that cortical healing in G7 and G8 is muscle mediated and the defect used in our study is also somewhat mechanically stable.Lineage tracking studies are necessary to elucidate the cellular composition of both cancellous and cortical bone regenerated in this study.In the tibia defect model, we did not use a group wherein only CM was used to cover the defect without the presence of Gel-CaS-HA underneath.While the addition of this group could show the ability of the CM alone in healing of the defect, it was surgically difficult to ensure a firm endosteal placement of the membrane without a support from underneath.Also, this study was carried out using a specific collagen membrane and we did not compare the developed Gel-CaS-HA with the FDA approved absorbable collagen sponge in this model.However, their potential in locally delivering rhBMP-2 and inducing bone formation in the abdominal pouch model has been compared earlier .In an ectopic muscle pouch model, we established that the dual delivery of rhBMP-2 and ZA via a collagen membrane regenerates higher bone volume in comparison to delivering rhBMP-2 alone.Secondly, a macroporous Gel-CaS-HA scaffold can be used to deliver ZA or ZA + rhBMP-2 for cancellous bone regeneration in a metaphyseal defect in the tibia.Addition of rhBMP-2 to the ZA in the scaffold does not provide an additive effect in cancellous bone regeneration.Addition of Gel-CaS-HA scaffold to the cancellous defect can impair cortical healing despite the addition of bone active molecules.In the empty group, all cortices healed but no cancellous bone regeneration was seen.It can thus be inferred that scaffolding and bone active molecule delivery is critical for cancellous bone formation.This holds true for cancellous bone regeneration but is not necessarily applicable for cortical bone regeneration.Protrusion of the Gel-CaS-HA scaffold through the cortical bone is suspected to interfere with the normal cortical bone repair process.The results indicated that a barrier in the form of a CM, delivering low dose rhBMP-2, at the endosteum and cortical bone interface at the defect site significantly enhances the cortical healing.Thus, we show a promising approach of combining a porous Gel-CaS-HA scaffold and a CM loaded with bone active molecules for guiding cancellous and cortical bone repair, respectively.This strategy could be translated into the clinical setting for improved 
treatment in patients with metaphyseal bone defects.

MT, LL, MHZ, AK, HI, DBR and DL designed the study. AK, IQ and DBR fabricated the Gel-CaS-HA. MHZ was involved in the fabrication of the CM. MT, DBR and DL performed the animal surgeries. DL, DBR and HI performed the micro-CT imaging, the analysis pipeline and the image analysis. DBR and DL contributed to the histology. DBR wrote the first draft of the manuscript and all named authors contributed to revising the manuscript.

LL is a board member of Bone Support AB, Lund, Sweden. LL and MHZ are board members of Ortho Cell, Australia. All other authors have nothing to disclose. The data associated with this manuscript are available upon request to the corresponding author or the senior author.
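As a companion to the micro-CT analysis described in the methods above, the sketch below illustrates how a bone volume fraction (BV/TV) can be computed from a reconstructed 8-bit image stack and a binary region-of-interest mask using the 100–255 grayscale threshold reported in the text. The array names, voxel size and demo data are placeholders; the analysis in the study itself was performed in CTAn.

```python
# Sketch of BV/TV quantification from a reconstructed micro-CT stack (8-bit grayscale).
# The stack, ROI mask and voxel size are placeholders; the study used CTAn for the
# actual analysis, with thresholding at grayscale 100-255.
import numpy as np

def bone_volume_fraction(stack, roi_mask, threshold=100, voxel_size_mm=0.01):
    """Return (BV in mm^3, TV in mm^3, BV/TV) inside a binary ROI mask."""
    voxel_vol = voxel_size_mm ** 3
    bone = (stack >= threshold) & roi_mask          # voxels counted as mineralized tissue
    bv = bone.sum() * voxel_vol                     # bone volume
    tv = roi_mask.sum() * voxel_vol                 # tissue (ROI) volume
    return bv, tv, bv / tv if tv > 0 else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_stack = rng.integers(0, 256, size=(50, 50, 50), dtype=np.uint8)
    demo_roi = np.zeros_like(demo_stack, dtype=bool)
    demo_roi[10:40, 10:40, 10:40] = True            # toy stand-in for a defect ROI
    print(bone_volume_fraction(demo_stack, demo_roi))
```

For regions with variable tissue volume, such as ROI2 and ROI3 described above, only the bone volume term would be reported.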
A metaphyseal bone defect due to infection, tumor or fracture leads to loss of cancellous and cortical bone. An animal model separating cancellous and cortical healing was used with a combination of a macroporous gelatin-calcium sulphate-hydroxyapatite (Gel-CaS-HA) biomaterial as a cancellous defect filler and a thin collagen membrane (CM) guiding cortical bone regeneration. The membrane was immobilized with bone morphogenic protein-2 (rhBMP-2) to enhance its osteoinductive properties. The Gel-CaS-HA cancellous defect filler contained both rhBMP-2 and a bisphosphonate (zoledronate, ZA) to prevent the premature callus resorption induced by the pro-osteoclast effect of rhBMP-2 alone. In the first part of the study, the CM delivering both rhBMP-2 and ZA was tested in a muscle pouch model in rats, and the co-delivery of rhBMP-2 and ZA via the CM resulted in higher amounts of bone compared to rhBMP-2 alone. Secondly, an established tibia defect model in rats was used to study cortical and cancellous bone regeneration. The defect was left empty, filled with Gel-CaS-HA alone, Gel-CaS-HA immobilized with ZA, or Gel-CaS-HA immobilized with rhBMP-2+ZA. Functionalization of the Gel-CaS-HA scaffold with bioactive molecules produced significantly more bone in the cancellous defect and its surroundings, but cortical defect healing was delayed, likely due to the protrusion of the Gel-CaS-HA into the cortical bone. To guide cortical regeneration, the cortical defect was sealed endosteally by a CM with or without rhBMP-2. Subsequently, the cancellous defect was filled with Gel-CaS-HA containing ZA or rhBMP-2+ZA. In the groups where the CM was doped with rhBMP-2, a significantly higher number of cortices bridged. The approach of guiding cancellous and cortical bone regeneration separately in a metaphyseal defect using two biomaterials immobilized with bioactive molecules is promising and could improve the clinical care of patients with metaphyseal defects.
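The group comparisons summarized above follow the decision rule described in the statistics section of the methods: a normality check on the residuals, then ANOVA with a post-hoc test, or a Kruskal-Wallis test when normality fails. A minimal sketch of that pipeline using SciPy and statsmodels is given below; the group values are placeholders, and the Games-Howell alternative mentioned in the text would require an additional package such as pingouin.

```python
# Sketch of the statistical pipeline described in the methods:
# Shapiro-Wilk on residuals -> ANOVA with Tukey post-hoc (if variances are homogeneous),
# otherwise a Games-Howell post-hoc; Kruskal-Wallis when normality fails.
# Group values below are placeholders, not data from the study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(groups: dict, alpha: float = 0.05):
    names = list(groups)
    values = [np.asarray(groups[n], dtype=float) for n in names]

    # Residuals = deviation of each observation from its group mean
    residuals = np.concatenate([v - v.mean() for v in values])
    normal = stats.shapiro(residuals).pvalue > alpha
    equal_var = stats.levene(*values).pvalue > alpha

    if normal:
        print("ANOVA:", stats.f_oneway(*values))
        if equal_var:
            endog = np.concatenate(values)
            labels = np.concatenate([[n] * len(v) for n, v in zip(names, values)])
            print(pairwise_tukeyhsd(endog, labels, alpha=alpha))
        else:
            print("Unequal variances: use a Games-Howell post-hoc (e.g. pingouin).")
    else:
        print("Kruskal-Wallis:", stats.kruskal(*values))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demo = {f"G{i}": rng.normal(loc=10 + i, scale=2, size=8) for i in range(1, 5)}
    compare_groups(demo)
```

For the two-group muscle pouch comparisons, scipy.stats.mannwhitneyu would replace the multi-group branch, mirroring the non-parametric choice made in the text for small sample sizes.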
484
Oil quality of pistachios (Pistacia vera L.) grown in East Azarbaijan, Iran
Pistachio nut is one of the most popular tree nuts all over the world.This nut is widely consumed raw or toasted and salted and is used as an ingredient of many kinds of foods .Pistachio nut has peculiar organoleptic characteristics and high nutritional value.It is a rich source of fat and is high in mono unsaturated fatty acids.Moreover, it is a good source of proteins, minerals and bioactive components .The pistachio kernel oil has considerable amounts of oleic, linoleic and linolenic acids , which have important therapeutic properties, such as reducing triacylglycerols, low-density cholesterol, total cholesterol and the glycemic index .Pistachio oil contains bioactive and health promoting compounds such as tocopherols, sterols and phenolic compounds .Numerous studies conducted to investigate the characteristics of pistachios with different geographic origins showed that pistachio oil composition varied in different varieties and climate conditions .Iran is one of the most significant pistachio producers and exporters in the world .In 2016–17, Iran produced >250,000 t pistachio nuts .Therefore, several studies have been done on the characteristics of Iranian pistachios .Drought is an obvious environmental situation in East Azarbaijan.Scientists believe that the cultivation pattern should be compatible with the environment.Planting crops that do not require much water can significantly reduce the harmful effects of drought.Pistachio grows naturally in most of the saline and dry areas, and next to dates, is considered to be the most resistant tree to salinity.Therefore, the northwest of Iran is suitable for pistachio cultivation.The annual production of 139 tons of pistachios in East Azarbaijan indicates the potential of the region to grow the nut.To the best of our knowledge, no studies have been done by now and nothing has yet been published on the composition of the local varieties of the pistachio of the East Azarbaijan, Iran.The present work is the first comprehensive scientific study on the characteristics of the oil from the several important varieties of the pistachios grown in East Azarbaijan, Iran.The pistachio samples from Azarshahr, Jolfa, Marand, Soufian and Islamic Island were collected at ripeness.The samples were 2 varieties from Marand, 3 varieties from Islamic Island, 4 varieties from Azarshahr, 4 varieties from Jolfa and 4 varieties from Soufian.The pistachios were obtained from the products harvested in the gardens, each belonging to a particular variety.About 2 kg from each variety was separated in triplicate for further analysis.The chemicals and solvents were obtained from Merck.After peeling the nuts manually, the pistachio kernels were milled and the moisture content was determined at 105 °C for 6 h in an oven according to the method described by Chahed et al. .Oil extraction was accomplished by solvent extraction technique using n-hexane according to the method described by Azadmard-Damirchi et al. 
.Briefly, the chopped nuts were processed with 30 ml n-hexane and were shaken for 1 h.The extract was filtrated through defatted filter papers under vacuum condition using a Buchner funnel.Then, a rotary evaporator was used under reduced pressure at 40 °C to separate the oil from the solvent.The prepared samples were kept at −18 °C to be further analyzed.To determine oil extraction yield, the weight of the oil extracted from 100 g pistachio samples, was determined .The oil samples were analyzed for acidity, peroxide value and refractive index according to the method of AOAC .The oxidative stability of the pistachio oil samples was evaluated by Rancimat according to the method described by Tabee et al. .Briefly, 2.5 g of the oil samples were placed in the reaction vessel, and the temperature was raised to 110 °C by flowing hot air.The volatile oxidation products were collected in a flask which contained distilled water.The conductivity of distilled water changes as oxidation results in the release of volatile compounds.The oxidative stability index is the time that passes until the changes take place at a high rate, and the induction times are reported in hours.The carotenoid and chlorophyll content of the oil samples was determined according to the method described by Minguez et al. .For spectroscopy analysis, the chlorophyll and carotenoid fractions were extracted in cyclohexane, and the absorption of the extracted pigments was measured at 670 and 470 nm, respectively.The fatty acid methyl esters were obtained from the pistachio oil samples according to the method described by Fathi-Achachlouei and Azadmard-Damirchi .2 ml NaOH in methanol was added to the test tube containing dissolved oil sample in 0.5 ml hexane, and then placed in a 60 °C water bath for 10 min.After adding boron trifluoride in methanol, the 60 °C water bath was used again for more 10 min.Water flow was utilized for cooling the test tubes and then 2 ml of sodium chloride) and 1 ml hexane were added to the mixture.The FAMEs- containing fractions were separated by centrifugation.The FAMEs were analyzed by YL 6100 GC equipped with a flame ionization detector according to the method described by Azadmard-Damirchi and Dutta .A 60 m × 0.25 mm, 0.2 μm film thickness capillary column TR-CN100, Spain) was used for the analysis.A 230 °C temperature was used for the injector and 250 °C for the detector.The initial temperature of the oven was 158 °C increasing to 220 °C, and the rate was 2 °C/min.The samples were held at this temperature for 5 min.The carrier gas used in the GC was Helium, and nitrogen, at a flow rate of 30 ml/min, was used as the make-up gas.The FAMEs were identified based on their retention times and then were compared with those of the standard FAMEs.The tocopherols determination was done by high-performance liquid chromatography according to the method described by Tabee et al. 
.The LiChroCART 250–4 column packed with LiChrosphere 100 NH2 was used for this purpose.Detection of Tocopherols was performed by Agilent 1260 fluorescence detector.The mobile phase was n-heptane: tert-butyl methyl ether: tetrahydrofuran: methanol mixture, and its flow rate was 1.0 ml/min.The oil samples were saponified, and the phytosterols were prepared as trimethylsilyl ether derivatives.Then the derivatives were analyzed by GC according to the method described by Azadmard-Damirchi and Dutta .A capillary column TRB-STEROL 30 m × 0.22 mm, 0.22 μm, Spain) was used.The gas chromatograph YL 6100 GC had a flame-ionization detector.The temperatures of the injector and detector were 260 °C and 310 °C, respectively.The oven condition was 60 °C for 1 min and then increased to a final temperature of 310 °C at a rate of 40 °C/min and was maintained at this temperature for 27 min.Helium was the carrier gas, nitrogen was used as the make-up gas, and the flow rate was 30 ml/min.All the analyses were done in triplicate, and the reports show the means of the results.The statistical evaluation of the data was conducted using a general linear model analysis of variance."To determine the significant differences among the treatment means, Duncan's multiple range test at a level of P < .01 with Minitab 17 was used.The moisture content of nuts is an important factor in their stability and shelf life.The obtained results showed that the fresh kernels of pistachio samples from different parts of East Azarbaijan had the moisture content ranging from 35.2 to 47.9% .Chahed et al. found the moisture content ranging from 25 to 38.4% for two Tunisian varieties.According to kashaninejad et al. the pistachio kernels had a moisture content ranging from 37 to 40%.Due to their high moisture content, drying is necessary to avoid hydrolytic and oxidative degradation of the oils as well as to prevent microbial spoilage and production of toxins.According to the obtained results, origin and variety had a significant effect on the moisture content .The oil content of seeds and nuts is a major parameter from nutritional and economical points of view.Pistachio is one of the nuts rich in oil.The oil content of the samples showed that it ranged from 49.9 to 58.5% .The lowest and the highest oil content were recorded for Shahpasand and Damgani varieties, respectively.The results showed a significant difference between varieties and regions.These data are concurrent with previously published results .The oil yield of the pistachios from Turkey varied between 57.1 and 58.9% and 56.1–62.6% for Uzun and Siirt varieties respectively .Daneshmandi et al. 
reported that the oil content of two Iranian pistachio varieties in the Khorasan region, Kallequchi and Akbari, was 45.43 and 50.47%, respectively, which was lower than the same varieties from Azarbaijan. The oil content of Iranian pistachios from Damghan ranged between 52.48 and 60.64%; Akbari, Kallequchi and Shahpasand had 60.64, 56.35 and 52.48% oil, which was higher than the data obtained in this study. Differences in the oil contents of pistachio cultivars might arise due to differences in factors such as growing conditions, harvesting time and climate.

One of the important indices of oil quality is acidity. High acidity indicates a high free fatty acid content and triacylglycerol hydrolysis. High acidity in oil causes a low smoke point and fast oxidation, which make the oil less usable in the food industry. The acidity values of the oils in the present study ranged from 0.03 to 0.3%. The low acidity of the oil samples from the fresh kernels indicated that hydrolytic rancidity had not occurred. Daneshmandi et al. found pistachio oil acidities ranging from 0.37 to 0.62%. In another study, the acid values varied from 0.07 to 0.9%. According to Arena et al., the acidity of Iranian pistachio oil is about 0.65%.

Peroxide value (PV) is a major indicator of fat oxidation. It is an index of the concentration of hydroperoxides, which are formed during lipid oxidation. PV values ranged from 0.19 to 1.0 meq O2/kg oil. Arena et al. found the peroxide value of Iranian pistachio oil to be 6.8 meq O2/kg oil. According to another study on pistachio oil from the Khorasan region of Iran, the PV of the oil was 2.855 to 3.195 meq O2/kg oil.

The oil stability index (OSI) of the pistachio oils at 110 °C ranged from 12.4 to 17 h. According to the obtained results, the OSI of the samples from different origins and of different varieties was almost the same. Resistance to oxidation is one of the important parameters in the qualitative evaluation of oil and is affected by the fatty acids and bioactive components of the oil. Dini et al.
reported that the oil Rancimat values of the Kallequchi, Fandoghi, Akbari and Aghaea varieties in Rafsanjan were 12.68, 12.95, 12.24 and 14.75 h, respectively, at 110 °C, which indicated lower stability than for the Kallequchi and Akbari varieties grown in Azarbaijan. Generally, the obtained data are in accordance with previously published results that reported an OSI of 16 h for raw pistachio oil.

The refractive index (RI) of oil is a quickly measured quality parameter. Due to its ease of measurement and its relation to the oil structure, the RI is used to determine the purity of oils and to monitor the progress of hydrogenation or isomerization occurring in oil. The refractive index of the oil samples was about 1.4, and the values did not vary significantly among varieties and regions. These results are in accordance with another study in Turkey, which found the RI of pistachio oil of different varieties to be 1.46.

Chlorophyll pigments are one of the significant factors in evaluating the quality of oils since they relate to oil color. At sufficiently high concentrations, chlorophyll gives the oil a greenish color; however, this pigment may intensify photo-oxidation and off-flavor of the oil, reducing its shelf life. According to the results, the Ouhadi variety from Azarshahr had the maximum and the Shahpasand variety from Jolfa had the minimum amounts of chlorophyll, 72 and 15 mg pheophytin/kg oil, respectively. The chlorophyll content varied significantly among varieties and regions. In another study, the chlorophyll content of pistachio oil was reported to be 24.09 mg/100 g. Bellomo et al. found the chlorophyll a content of pistachio oil to be 34.05 mg/kg and the total chlorophyll to be 62.07 mg/kg at 25 °C.

Carotenoids are fat-soluble pigments responsible for oil color and precursors of vitamin A. Carotenoids have an important function as antioxidants, decreasing the risk of cardiovascular disease and cancer. High contents of carotenoids improve the nutritional value and stability of oils due to their quenching effect. Carotenoid contents of the oil samples ranged from 5.4 to 11.5 mg/kg oil and showed significant differences among regions and varieties. Unlike other plants, most nuts are not rich in carotenoids. The total carotene content of pistachios from Turkey was found to be between 1.01 and 4.93 mg/kg. Bellomo and Fallico reported lutein as the major carotenoid of pistachios, varying from 18 to 52 mg/kg dry matter, and found a β-carotene content below 1.8 mg/kg. Kornsteiner et al. showed that β-carotene levels in pistachio ranged from non-detectable to 1.0, and lutein from 1.5 to 9.6 mg/100 g of extracted oil. Environmental conditions may affect the carotenoid contents of plants, which explains the variation in the carotenoid contents of the oil samples.

Pistachios are a good source of fatty acids essential for human nutrition, containing saturated, monounsaturated and polyunsaturated fatty acids. Less than 11.6% of the total pistachio fatty acids are saturated. Pistachio oil is considered an oxidation-resistant oil due to its high content of oleic acid and low amounts of polyunsaturated fatty acids, so it can be a suitable oil for cooking and frying. Clinical studies show that pistachio nuts have beneficial effects on serum lipids and reduce the risk of cardiovascular problems. They also have great potential to prevent cancer and rheumatic diseases.
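The chlorophyll and carotenoid contents discussed above are derived from the absorbance readings at 670 and 470 nm in cyclohexane described in the methods. A minimal sketch of that conversion is shown below; the specific extinction coefficients used (613 for pheophytin a and 2,000 for lutein) are the values commonly quoted for the Minguez-Mosquera procedure and are an assumption here, so they should be checked against the original reference before use.

```python
# Sketch of pigment quantification from the absorbance readings described in the methods
# (chlorophyll fraction read at 670 nm, carotenoid fraction at 470 nm, in cyclohexane).
# The specific extinction coefficients below are assumed values commonly used with the
# Minguez-Mosquera method; verify them against the original reference.

E0_PHEOPHYTIN_A = 613.0   # assumed specific extinction coefficient at 670 nm
E0_LUTEIN = 2000.0        # assumed specific extinction coefficient at 470 nm

def chlorophyll_mg_per_kg(a670, path_cm=1.0):
    """Chlorophyll content expressed as mg pheophytin per kg oil."""
    return a670 * 1e6 / (E0_PHEOPHYTIN_A * 100.0 * path_cm)

def carotenoid_mg_per_kg(a470, path_cm=1.0):
    """Carotenoid content expressed as mg lutein per kg oil."""
    return a470 * 1e6 / (E0_LUTEIN * 100.0 * path_cm)

if __name__ == "__main__":
    # Illustrative absorbances only, not measured values from the study
    print(f"chlorophyll ~ {chlorophyll_mg_per_kg(2.5):.1f} mg pheophytin/kg oil")
    print(f"carotenoids ~ {carotenoid_mg_per_kg(1.5):.1f} mg/kg oil")
```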
According to the obtained results, the major fatty acid in the oil samples was oleic acid; the Akbari variety from Azarshahr had the highest and the Kallequchi variety from Azarshahr the lowest oleic acid content, and the level of oleic acid did not appear to vary significantly among varieties and regions. The other main fatty acids identified in the oil were linoleic acid, palmitic acid, palmitoleic acid, stearic acid and linolenic acid. Palmitic acid, the major saturated fatty acid, varied significantly among the different varieties and regions, except for the varieties from the Marand region, which were the same. The oleic acid contents of the Kallequchi, Fandoghi and Akbari varieties from Kerman, Iran, were reported as 52.5, 55.6 and 53%, respectively. Uzun and Siirt, two pistachio varieties from Turkey, had 55.4–62.6% and 60.7–65.5% oleic acid, respectively, which is similar to our results. According to Arena et al., oleic acid was the main fatty acid in pistachios from different countries including Iran, followed by linoleic, palmitic, stearic, palmitoleic and linolenic acids. Our results are in accordance with those of many other researchers. Tocopherols are natural compounds of vegetable oils that are valuable for their nutritional properties and impact on health; they are known for their antioxidant activity as well as their critical role in oil stability and shelf life. The total tocopherol content of the oil extracted from the pistachio samples was 125–258 mg/kg oil. The predominant tocol was γ-tocopherol in all samples, followed by δ-tocopherol and α-tocopherol, and the oil extracted from the pistachio samples from Azarshahr had the highest total tocopherol content. The total tocopherol content of different varieties from Kerman and Rafsanjan in Iran has also been reported as 175 to 238 mg/kg oil. A predominance of γ-tocopherol in pistachios was also reported by Ozrenk et al., who studied 14 pistachio genotypes in Turkey containing γ-tocopherol followed by α-tocopherol and δ-tocopherol, although with lower γ- and δ-tocopherol contents than the samples analyzed in this study.
Kornsteiner et al. also reported the combined β- and γ-tocopherol content and the δ-tocopherol content of pistachios. Overall, the tocopherol data for the oils extracted from pistachios harvested in different regions were generally within the range of previously reported values. The analysis also showed that the oil samples extracted from the different pistachio varieties did not contain detectable amounts of tocotrienols. The sterol content of vegetable oils is considered an indicator for nutritional and qualitative evaluation because of the sterols' various health effects and antioxidant activity. Sterol analysis is also important for identity and authenticity control, as the sterol profile is a valuable parameter for detecting unknown oils and mixtures. According to the obtained results, five sterols were identified in the pistachio oil samples. The total sterol contents of the oils extracted from the different pistachio samples were 1125–2784 mg/kg oil. The most abundant sterol was β-sitosterol, which ranged from 966 to 2419 mg/kg oil in the Aghaea varieties from Islamic Island and Soufian, respectively. All the varieties from Soufian had the highest total sterol contents and also the highest amounts of β-sitosterol, indicating that region had a significant effect on the total sterol content and β-sitosterol level of the oil samples. Δ5-avenasterol was the second main sterol in the oil samples, followed by campesterol, with stigmasterol and cholesterol found at lower levels. Similar results for pistachio oils have been reported by others. In another study, oil extracted from pistachios had 1920–2100 mg/kg total sterol and 1140–1190 mg/kg β-sitosterol. A mean total sterol content of 2719 mg/kg oil has also been reported, with β-sitosterol as the major sterol at a mean of 2548 mg/kg oil, followed by minor sterols such as campesterol and stigmasterol. The sterol contents of the pistachio oil samples analyzed in this study were thus generally in the range of previously reported data. In summary, pistachio kernel oil was collected from different parts of northwest Iran and analyzed for its composition. The results showed that variety and location affect pistachio oil composition, and the obtained data are useful from both nutritional and technological standpoints. As data on the composition of pistachio oil, such as its fatty acids, sterols and tocopherols, are not widespread, the data presented in this study could be helpful in establishing international standards such as Codex standards.
The pistachio nut, as a strategic product, has a special place from both nutritional and economic points of view. Its quality can be affected by variety, growing origin and climate. In the present study, the chemical composition of pistachio nuts harvested from different parts of East Azarbaijan province in Iran (Jolfa, Marand, Soufian, Azarshahr and Islamic Island) was investigated. The nuts had an oil content of 49.9 to 58.5% and a moisture content of 35.2 to 47.9%. The acidity, peroxide value and oil stability index (OSI) of the extracted oils were 0.03 to 0.3%, 0.19 to 1.0 meq O2/kg oil and 12.4 to 17 h, respectively. In addition, the chlorophyll content was found to be 15 to 72 mg pheophytin/kg oil and the carotenoid content 5.4 to 11.5 mg/kg. In the fatty acid composition of the oil samples, oleic acid was the main fatty acid (52.5–63.9%), followed by linoleic acid (27.1–37.2%), palmitic acid (4.6–10.3%), palmitoleic acid (0.6–1.2%), stearic acid (0.1–1.3%) and linolenic acid (0.3–0.4%). The total tocopherol content of the pistachio oil samples was 125–258 mg/kg oil, of which γ-tocopherol was the main tocol (112–232 mg/kg oil), followed by δ-tocopherol (5.3–19.3 mg/kg oil) and α-tocopherol (1.2–6.1 mg/kg oil). The total sterol content of the samples was found to be 1125–2784 mg/kg oil. The most abundant sterol in the oils was β-sitosterol (966–2419 mg/kg oil); the other detected sterols were Δ5-avenasterol (72.6–170 mg/kg oil), campesterol (47.4–128 mg/kg oil), stigmasterol (12.3–27.8 mg/kg oil) and cholesterol (8.3–39 mg/kg oil). The obtained results showed that origin had significant effects on oil composition. These data can be used to evaluate the nutritional value and authenticity of the oils and as useful information in establishing standards.
485
Maximising the value of electricity storage
The world's leaders have now pledged to limit global warming to well below 2 °C, which will require significant increases in the penetration of intermittent renewables, inflexible nuclear generation and carbon capture and storage, together with electrification of the heat and transport sectors. This raises considerable challenges in operating future electrical grids both efficiently and reliably. Electricity storage, demand side response, flexible generation and interconnection all offer methods to alleviate these issues. Currently, storage is proving too expensive to make a significant contribution. Whilst much work is being carried out to reduce costs and improve efficiencies, this paper explores how storage can maximise its revenues by operating in multiple markets. Previous works have focused on optimising for a single revenue stream such as arbitrage, have used global optimisation tools on specific cases, and have typically required perfect or very good foresight of future prices. This work takes an existing algorithm for arbitrage from the EnergyPLAN software by Lund et al. and extends it to co-optimise the provision of reserve, which we show can increase storage revenue by an order of magnitude. A full mathematical description and an open source implementation in MATLAB are given as Supplementary material. The following section evaluates the revenue streams available to storage, barriers to its uptake, and the various technologies available. Section 3 describes the algorithm to optimise the operation of storage for arbitrage, with or without reserve services, under perfect and no foresight of future spot market prices and reserve utilisation. Section 4 gives a demonstration of the algorithm, simulating lithium ion and sodium sulphur batteries operating in the British electricity market. The results evaluate the attainable profits and rates of return within the current UK market, together with a sensitivity analysis of various model inputs and an assessment of storage integrated with a wind farm. Storage has the flexibility to operate within the energy market, trading energy to gain from arbitrage, and in ancillary markets, offering reserve, power quality and reliability services. It can also be integrated with existing infrastructure: generators such as wind farms, demand centres, or networks. The spread between daily peak and off-peak electricity prices depends on a multitude of factors: the difference in fuel costs of baseload and peaking generation, the carbon price, the difference between peak and baseload demand, and the penetration of renewables and flexible technologies. Similarly, future electrification of heat and transport has the potential to increase or decrease the spread, dependent on the extent to which demand is managed so as to spread the peaks. Storage that relies on daily energy arbitrage is susceptible to changes in the daily spread. Renewables may affect the spread by reducing prices when their output is high. Some storage schemes, such as pumped hydro with very large reservoirs, may be capable of arbitrage over longer timescales, perhaps taking advantage of weekly spreads, which are driven by lower demand over weekends rather than by renewable penetration. Wind or PV that coincides with peak demand can reduce the spread. This appears to be the case in Germany, where PV coincides with peak daytime demand and suppresses prices during the day, resulting in lower peak prices, which now occur in the morning and evening. British peak prices occur in the evening, and so PV may instead increase the
daily spread.Wind power has a less systematic diurnal pattern, but the penetrations seen in Germany and Britain are now sufficient to cause negative electricity prices, and thus increase the daily spread.Fig. 1 displays the average daily spread in Germany since 2002 as a proportion of the median spot price, against the growth of solar PV and wind penetration.Before the rise in PV capacity, the cost difference between coal and gas plants was the main driver ; however, since 2008, the spread has consistently reduced, as the penetration of PV has dramatically increased.The daily demand profile varies significantly between countries.For example, the UK’s peak demand is typically in the evenings, when solar is less likely to displace conventional generation.This greatly reduces its impact on the price spread, though it may still depress average wholesale prices.A second type of revenue that storage can access is from balancing services.In the UK, there are three types :Ancillary and Commercial Services,Contract Notifications Ahead of Gate Closure,Bid – Offer Acceptances,The first includes specific services that are contracted for in advance, namely reserve, response, power quality and reliability services.The income is typically based on utilisation volumes and/or availability offerings.The second enables National Grid to contract directly with parties to purchase or sell electricity ahead of gate closure, typically when it predicts system imbalances may occur ; however, it is rarely used and is hence not considered further .The third type, the ‘balancing mechanism’, operated post gate closure.Generators and consumers can submit bids to buy electricity and offers to sell electricity, indicating the price at which they are willing to deviate from their preferred schedule .The contracted nature of ancillary services results in income streams that are typically more predictable or at least offer some level of certainty, and hence these are considered further for the remainder of this study.Ancillary services consist of frequency response, reserve, black start and reactive power services .In a broad sense, response services balance the power demanded with generation on a second by second basis, whereas reserve provides energy balancing during unforeseen events of longer duration, such as a tripped generator or incorrectly forecast demand.Black start is required in case of total or partial transmission system failure, to gradually start up power stations and link together in an island system.Finally reactive power services involve maintaining adequate voltages across the transmission network, though such a service may also be useful on distribution networks.A more detailed description of these is given in the online supplement.It is likely that storage has roles to play in all four elements of ancillary services; however, we focus on the provision of reserve, and specifically short term operating reserve for reasons of data availability.STOR is a commercially tendered service, where a constant contracted level of active power is delivered on instruction from National Grid, typically when demand is greater than forecast or to cover for unforeseen generation unavailability.The service only requires participants to be available during predefined availability windows, with typically two to three occurring per day .Participants are expected to deliver within 4 h of instruction, with a minimum capability of delivering 3 MW for 2 h, followed by a maximum 20 h recovery period .In 2012/13, the majority of 
units were less than 10 MW in capacity, with typical utilisation times of 90 min. Providers are selected through competitive tenders based on economic value, historic reliability and geographic location. Committed providers are expected to remain available for all windows over a season, meaning they cannot generate for other services. The volume of the British electricity market averages ∼850 GWh per day, and peaks at a daily average of 53 GW. For arbitrage on a daily level, an average of 67 GWh per day and up to 13 GW could be moved before the diurnal profile was completely flattened. For reserve, STOR holdings of ∼2.3 GW are currently considered optimal, with a mean daily utilisation of 0.69 GWh between April 2014 and March 2015. Optimal fast reserve holdings were typically ∼300 MW, with a mean daily utilisation of 0.74 GWh throughout 2014/2015; between November 2015 and March 2016, this increased to 600 MW during the morning and evening. Response holdings depend on total demand, the largest expected single loss of generation, and the output of intermittent generation. Hence holdings are higher during summer and overnight, when demand is relatively low. Typical minimum daily holdings range between ∼400 and ∼700 MW for primary response, ∼1200 and ∼1450 MW for secondary response, and ∼0 and ∼150 MW for high frequency response, dependent on the time of year. However, diurnal variation is particularly significant for primary and high frequency response, where early morning summer requirements often exceed 1350 MW and 390 MW respectively. Response, reserve and reactive power services are remunerated for both availability and utilisation. Services that include availability windows also receive window initiation payments to compensate the participant for readying their plant prior to each window. The total annual spending on each service by National Grid typically ranges from £50 m to £150 m per year. The market size for shorter timescale services is greater, suggesting storage with fast response times would have the potential to access greater revenue streams. This is in agreement with Strbac et al., who suggest shorter duration storage has much greater value. Historically, the level of reserve services procured is set to cover three standard deviations of uncertainty, and hence can accommodate over 99% of unexpected fluctuations. The uncertainty comprises errors in both the forecast demand and the forecast supply. The latter includes unexpected plant outages, loss of the single largest generating unit, and imperfect forecasts for weather-dependent renewables output. Recent forecast requirements for primary and high response have approximately doubled, in preparation for larger units connecting to the system and in response to the dramatic increase in wind and solar capacity. Intermittency increases the standard deviation of supply fluctuations; however, the increase is only moderate due to the smoothing of outputs up to an hour ahead and good forecast accuracy up to several hours ahead. The increase does, however, lead to greater demand for flexible products that can change output rapidly many times per day, as well as maintain a very low or zero standby level. Yet increasing the holding of products such as STOR may not be the most cost-effective way to deal with intermittency. Balancing requirements for wind vary continuously from hour to hour, day to day and week to week, whereas STOR is fixed for an entire season. Hence, in the future, this could lead to the introduction of new balancing services. A review by Gross et al.
found six of seven studies quoting increases in overall reserve requirements of between 3 and 9% for a 20% penetration of intermittent generation .It is worth noting, that current reserve required to cover wind and PV total about 17% of their output .Other factors such as electrification of heat and transport may also have an effect by making demand more variable between periods and increasing forecast errors , together with an increase in power plant genset sizes resulting in higher response and reserve requirements .Further sources of revenue include integrating storage with generators, demand centres or networks.Generators such as wind farms may benefit by utilising storage to improve delivery forecasts and thus reduce balancing costs, and by shifting the time of delivery to sell for higher prices.This is particularly pertinent if wind penetration increases due to its effect on suppressing spot prices during periods of high national wind output .Many wind farms currently operate under a power purchase agreement, which typically purchase all wind output at a fixed price .This offers a price guarantee, at the expense of including a risk premium.Control over when electricity is delivered may enable better terms to be gained as part of a PPA, or the confidence to operate directly on the spot market.Finally, storage could also be useful if in the future wind farms are offered non-firm connections, i.e. if they are not entitled to receive constraint payments.Storage can also prove useful for demand sources.Customers on time-of-use tariffs can reduce imports from the grid at times of high prices, as well as reduce network service charges, for instance through triad avoidance .Finally networks may also benefit from storage through deferral of transmission or distribution reinforcement.This is particularly beneficial to distributed storage, in avoiding the significant cost of upgrading distribution networks to meet any future increases in peak demand .However, transmission network operators in the EU are not allowed to own storage assets as they are currently classed as generators.This, together with further barriers to storage, is discussed in the next section.Investment in storage faces many barriers because of current policy and regulation, which are comprehensively reviewed by Anuta et al. and Grünewald et al. 
.Energy storage systems are multifunctional, and may act as generator, consumer or network asset at different points in time or simultaneously.Current regulation classifies storage based on its primary function , leading to issues with ownership.According to EU law , transmission network operators are forbidden from participating in the electricity markets, and hence would be unable to supplement their return on storage devices through competitive market participation.Whether storage is classed as a generator or consumer also impacts on transmission and distribution use-of-system charges.If a consumer, then often consumers are subject to taxes to subsidise renewables .A new asset class for storage could overcome these issues.Other than pumped hydro, storage technologies are still largely developing, hence there is currently a lack of standards on their design, deployment and evaluation of their economic value .For a network operator, investment in traditional network assets offers a low risk investment with guaranteed revenue streams.In contrast, the high capital costs of storage, uncertain future income streams and lack of storage precedents, result in high risk proposition .Hence storage may not ‘fit’ into the business model of traditional transmission system operators, relying instead on competitive market participants.Furthermore, the benefits storage may offer to grid or centralised generator utilisation and corresponding cost and efficiency benefits are difficult to quantify, although this paper aims to make this more straightforward in future.The fixed premia widely used to incentivise renewable generation do not reward dispatchable facilities and are often accompanied by export guarantees .Hence renewable generation often operates at the expense of conventional plant, increasing system-wide integration costs through displacing more energy than capacity, and decreasing asset utilisation.As these costs are socialised, there is a lack of transparency over the true costs of inflexible renewable generation.A two-tier tariff could incentivise owners of renewable energy plants to provide dispatchable energy, as is the case on some Greek islands .Whilst storage may provide indirect benefits to renewables in terms of reduced curtailment and hence increased penetration, the electricity itself that is stored may or may not be sourced purely from renewables, if the storage device is connected directly to the grid.This creates difficulty in terms of subsidising storage as a renewable device.However Krajačić et al. 
propose that a guarantee of origin scheme could alleviate such issues .Even so, under current rules, electricity from renewables that charges storage before entering the grid cannot receive subsidies .Therefore, connecting a wind farm to a storage device would forfeit any renewable incentives.Power quality is likely to deteriorate as the penetration of renewable energy increases, particularly distributed solar PV or other domestic microgeneration .However, currently there is no incentive to improve power quality and it is difficult to quantify .In liberalised electricity markets, the reserve market may provide a significant income stream for storage technologies .According to Wasowicz et al., revenue increases between 6.2% and 19.2% could be obtained for storage operators in Germany if grid support was supplemented with reserve services .However, the state of charge of some storage devices may not be precisely known, hindering its operation in the reserve markets .It is currently estimated that 5% of all trades in the UK market occur on the spot market .The remainder are executed under opaque bilateral contracts, and often between a supplier and its generation arm.This leads to low liquidity in the spot market, increasing the entry barrier to small scale storage and new entrants, as is the case currently with distributed generation .According to Ferreira et al., remuneration for ancillary services within the EU are currently insufficient to make storage economically viable .Storage is not rewarded for its higher accuracy, faster response and greater ramp rates in comparison to conventional ancillary service providers.In the US however, regulation changes in 2013 stipulate that improved performance is now valued .Storage devices can provide a better service than gas turbines and engines, meaning that the same level of service could theoretically be provided with fewer MW of capacity; however, there is as yet no financial premium available for this.It is worth highlighting the importance of small scale distributed storage, particularly for distribution network operators.This could help mitigate peaks caused by future electrification of heat and transport , and to increase the penetration of distributed generation that can be managed with existing infrastructure.Electrification is an essential part of national decarbonisation strategies across Europe, but will radically alter the profile of electricity demand.For example, a million heat pumps or electric vehicles are estimated to add 1.5 GW to peak demand in Britain and Germany .The distribution cables that serve individual buildings were not designed to handle reverse power flows, where embedded solar panels and combined heat and power units export up to higher-voltage parts of the network .As this ‘last mile’ of the network is mostly buried under streets, it will be prohibitively expensive to reinforce, and so operators are considering storage as a lower-cost route to balancing microgeneration.Despite this, current policy development tends to focus on large scale storage .Furthermore, regulation changes could enable DNOs to operate in an active manner, undertaking regional balancing services to better manage power quality and network utilisation .Storage could then be used as a regulated asset.There are many excellent reviews of the storage technologies available , hence this section simply aims to summarise key points regarding use, and recent data on cost and efficiencies.The technologies broadly fit into three categories: bulk storage 
which operates over timescales of several hours to weeks; load shifting; and power quality .At the extreme, the UK can store around 50 TWh of natural gas, capable of discharging over 11 weeks .This highlights the potential scale at which hydrogen or synthetic natural gas could be stored, with the ability to operate over seasonal timescales.Pumped hydro storage and compressed air energy storage are the other bulk storage technologies, on the scale of GW and GWh.The UK hosts around 2.5 GW and 25 GWh of pumped hydro, split across four facilities.Battery technologies show lower capacities, and discharge over shorter timescales between several minutes to several hours .Conventional batteries have higher costs per kWh stored as they require fixed reagents, rather than large natural features to store energy.Flow batteries could attain lower specific energy costs as the reagent volume could be increased with a simple storage tank; however their current low volume of manufacture retains higher costs.The modular nature of batteries favours distributed storage; however the linear economies of scale mean that the cell cost per kW or kWh are similar when moving from residential to utility scale batteries, although the balance of plant costs can reduce dramatically.Finally, electrochemical capacitors and flywheels display low energy to power ratios of less than 1, discharging in the seconds to minutes range .The specific costs of the different technologies per kW and kWh are shown in Fig. 2 alongside their round-trip efficiencies, based on systematic reviews of hundreds of sources .It is clear that there is significant divergence between the cost per unit power and unit energy.Bulk energy stores such as PHS and CAES tend to exhibit the lowest $/kWh, as they benefit from economies of scale in storage capacity, but also exhibit the lowest efficiencies.Conversely, electrochemical capacitors exhibit relatively low $/kW, but extremely high $/kWh of over $10,000/kWh.This diversity in price and performance highlights the need for a range of market products to allow the different technologies to capture their true value.Bulk energy stores may find arbitrage a viable strategy, however electrochemical capacitors obviously require a market that can adequately reward its extremely fast response and ability to deliver high powers for very short times.Most previous studies that attempt to optimise the control of storage tend to perform global optimisation using mixed-integer linear programming, either to optimise for system-wide benefits or an independent investor.In addition, many previous works have looked only at arbitrage as a revenue source, assuming a price taker analysis .Wasowicz et al. includes more applications, obtaining a multi-market optimisation but only under certainty .In particular, they investigated the effect of grid congestion, storage technology and regulatory changes on the economic viability for an independent investor.Sioshansi et al. investigated the impact large amounts of storage would have on the price spread and value of arbitrage by correlating historic prices to total system demand, and evaluating the extent to which storage would flatten peak demand and off-peak demand .Connolly et al. 
tested practical control strategies for PHS, involving historical and future price forecasts of up to 24 h. They found that, on average, their practical strategy of optimising only 24 h ahead gained 97% of the truly optimal profits; however, such a strategy requires good price prognoses. In addition, the model only looked at arbitrage, ignoring potential revenues from alternative markets. Similarly, Bathurst & Strbac relied on accurate forecasts of imbalance prices to investigate the integration of storage with wind, optimising the balance between reducing imbalance charges and gaining from arbitrage. Therefore the aim of this project was to develop a simple algorithm that could optimise multiple revenue streams without the need for foresight. In particular, a simple method that could be run quickly and easily was preferred over a globally optimal solution. Hence the remainder of this paper sets out to explain the algorithm developed and, subsequently, the key findings. The aim of this work was to design and demonstrate a simple algorithm to optimise storage operation for multiple revenue streams: arbitrage, reserve and coupling with a wind farm. We take a deterministic algorithm from Lund et al. and Connolly et al. that finds optimal operation for arbitrage, add reserve and wind coupling, and demonstrate a selection of findings. The algorithm is technology neutral, and is capable of simulating storage for power applications and for bulk energy applications. Throughout this paper we compare three scenarios: arbitrage only ('ArbOnly'); arbitrage with reserve, only taking availability payments ('ArbAv'); and arbitrage with reserve, also taking utilisation payments ('ArbAvUt'). The two reserve scenarios are designed to explore the minimum and expected levels of income from providing reserve. ArbAv gives the lower bound: earning fixed availability payments for having the store available for reserve provision, but never receiving additional payments for actually providing reserve energy. This requires the store to maintain charge levels above a set limit and to forgo earning revenue from arbitrage during availability windows. ArbAvUt provides a central estimate: earning the availability payments as above and, additionally, utilisation payments based on the historic need for reserve, which are typically much higher than earnings from arbitrage. Analyses are carried out both under perfect foresight and under no foresight of future market prices and reserve utilisations. Perfect foresight is useful to gauge the maximum value obtainable from a storage device, or if a sound future price prognosis is available. No foresight accepts that future prices and utilisation volumes remain unknown and optimises accordingly. In the case of ArbAvUt, the use of no foresight offers a practical tool, where the model could provide guidance on optimal operation based on live updates of utilisation levels. Nevertheless, in all scenarios it is assumed the storage operator has access to wholesale market prices, which vary for each half-hour settlement period. The reserve scenarios are based on the 2013/14 STOR year, primarily because of good data availability for STOR in that year. Alternative balancing services do not provide such granular data, and hence, to avoid making many gross assumptions, the model is based on the STOR market. In reality, other ancillary services may be better suited to storage's fast response times. Nevertheless, if better data became available, the model principles could be adapted to the details of other services.
Fig. 3 outlines the overall process the algorithm follows. If no reserve services are offered, then arbitrage is optimised over all settlement periods and profits are calculated. If reserve services are offered, then it is assumed that no operation is permitted within availability windows unless the device is called upon for reserve. Reserve utilisation and availability services are then implemented within the availability windows and total profits calculated. The two options are discussed further below, with subroutines A to D detailed in the online supplement. The EnergyPLAN algorithm for arbitrage described by Lund et al. works by finding optimal charge-discharge pairs: the period with the maximum price, where discharging should occur, and a corresponding period with the minimum price, where recharging should occur. If the device can be fully utilised during these periods then they are removed from the series, and the next charge-discharge pair is found. Charge-discharge pairs are only accepted if they are profitable, accounting for the round trip efficiency of the storage device and other marginal costs. Low efficiency devices, or periods with homogeneous prices, will therefore see limited utilisation. On the first iteration there are no constraints on which hours or how much capacity is accessible, and so these will be the maximum and minimum priced hours respectively. As the algorithm progresses, constraints on when recharging can occur become binding, so as not to exceed the maximum or minimum possible charge levels. Lund shows that this arrives at the global optimum for profit, which we confirmed using a simple linear program written in GAMS. A simple example is presented in Fig. 4, and the online supplement gives a full mathematical description. The extension of this algorithm to consider reserve consists of four parts: first, energy prices are removed during the windows where reserve is provided; the device is then optimised for arbitrage outside of these windows; additional discharges due to reserve utilisation are added onto the profile; and finally, the operation outside of availability windows is modified to recover any discharges due to reserve utilisation and to ensure that additional constraints are met. The algorithm initially optimises for arbitrage in all periods outside of availability windows, as it is assumed the device is forbidden from providing arbitrage when committed to provide reserve. Reserve services are then introduced through a further step. Two scenarios representing extremes of income are considered: with no utilisation, and with typical utilisation. In both cases, identical remuneration for availability is received; however, the latter receives additional payments for energy discharged during availability windows at the request of National Grid. For the ArbAvUt scenario, the utilisation volumes are determined based on the input utilisation price and data for STOR utilisation provided by National Grid. These volumes are then applied during availability windows. We take historic STOR utilisation from the 2013/14 STOR year. This gives the average daily profile for working and non-working days during each season, and the total volume for each day of the year. We combine these to form an estimated half-hourly profile of STOR demand, and interpolate the price offered for this utilisation from the supply curve for that season. For the ArbAv scenario, there is no utilisation and hence no change in charge level during availability windows. For ArbAvUt, the charge level will decrease during some windows, meaning that additional recharging will be needed between windows.
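To make the pairing logic concrete, the following is a minimal Python sketch of a greedy charge-discharge pairing routine of the kind described above. It is not the EnergyPLAN or MATLAB implementation referenced in the text: the loss convention (losses applied on the charging side), the single shared power rating for charging and discharging, and all variable names are simplifying assumptions.

```python
import numpy as np

def greedy_arbitrage(prices, power_mw, capacity_mwh, eta, dt=0.5, marginal_cost=0.0):
    """Greedy charge-discharge pairing for spot-price arbitrage.

    prices       : spot price per settlement period (currency/MWh)
    power_mw     : maximum charge/discharge rate (MW)
    capacity_mwh : usable energy capacity (MWh)
    eta          : round-trip efficiency, applied here on the charging side
    dt           : settlement period length in hours (0.5 = half-hourly)
    Returns the dispatch per period (MW, positive = discharge) and the profit.
    """
    prices = np.asarray(prices, dtype=float)
    n = len(prices)
    dispatch = np.zeros(n)                   # MW, positive = discharge, negative = charge
    soc = np.zeros(n + 1)                    # stored energy at period boundaries (MWh)

    def soc_headroom(lo, hi, charging_first):
        # Energy that can be shifted between periods lo and hi (lo < hi) without
        # breaching the full or empty limits at the boundaries in between.
        seg = soc[lo + 1:hi + 1]
        return (capacity_mwh - seg).min() if charging_first else seg.min()

    order_hi = np.argsort(prices)[::-1]      # dearest periods first (discharge candidates)
    order_lo = np.argsort(prices)            # cheapest periods first (charge candidates)

    while True:
        best = None
        for d in order_hi:
            if dispatch[d] < 0:              # keep each period to a single role for simplicity
                continue
            for c in order_lo:
                if dispatch[c] > 0:
                    continue
                margin = prices[d] - prices[c] / eta - marginal_cost
                if margin <= 0:
                    break                    # no cheaper charge period can make d profitable
                e_d = (power_mw - dispatch[d]) * dt           # remaining discharge energy (MWh)
                e_c = (power_mw + dispatch[c]) * dt * eta     # energy this charge period can still supply
                e_s = soc_headroom(min(c, d), max(c, d), charging_first=(c < d))
                vol = min(e_d, e_c, e_s)
                if vol > 1e-9 and (best is None or margin * vol > best[0]):
                    best = (margin * vol, c, d, vol)
            if best is not None:
                break                        # accept the best pair for the dearest usable period
        if best is None:                     # no profitable, feasible pair remains
            return dispatch, float(np.sum(dispatch * prices) * dt)
        _, c, d, vol = best
        dispatch[d] += vol / dt                       # deliver vol MWh in period d
        dispatch[c] -= (vol / eta) / dt               # buy vol/eta MWh in period c to cover losses
        soc[min(c, d) + 1:max(c, d) + 1] += vol if c < d else -vol
```

In the reserve scenarios, settlement periods inside availability windows would simply be excluded from the candidate sets before this routine runs, with the utilisation discharges layered on afterwards; it is this layering that necessitates the feasibility checks described next.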
The algorithm then checks that three conditions are met: the minimum charge level is reached prior to every availability window; the charge level in all periods is less than or equal to the maximum capacity; and the charge level in all periods is greater than or equal to zero. If any of these are not met, charging or discharging is altered during the most economical feasible periods to satisfy the conditions. Fig. 5 gives a graphical explanation of this process, and a mathematical description is provided in the online supplement. Under perfect foresight, future market prices are known precisely, hence the storage can take advantage of fluctuations in the market price over periods of hours, days, or even months for large capacity seasonal storage. This approach is only practical if a good price prognosis is available, or if it is used to evaluate the return on storage under future market price projections. In reality, future prices and utilisation volumes are not known in advance, hence the algorithm's data inputs were modified to operate with no foresight. For the arbitrage-only scenario, a future price series was estimated based on the average daily price profile for each season in the previous year. The model was run with the estimated price series, and the resulting operation profile was combined with the real outturn prices to calculate the profits. The ArbAv and ArbAvUt scenarios also use these estimated price series to optimise for arbitrage outside availability windows. To accommodate no foresight of future utilisation volumes, the algorithm was modified to form a stepwise process. Utilisation volumes for the first window are revealed, and any recharging to meet the next minimum level requirement, together with any corrective actions to keep the charge level between zero and full capacity, is made after the first window and before the next. The process is then repeated for subsequent windows. In contrast, under perfect foresight there was no constraint on when these actions could take place, i.e. they could occur at any time prior to or after the corresponding window. This method is further explained and visualised in the online supplement. The above methods were also applied in conjunction with a wind farm, to improve control over when the electricity is delivered. In effect, the wind farm is able to perform arbitrage, with the constraint that storage charging is limited to the output of the wind farm, as in Fig. 6.
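A possible way to express that coupling constraint, reusing the greedy_arbitrage sketch above, is to replace the constant charging rating with a per-period limit set by the wind farm's output; the helper below is illustrative and not taken from the paper's implementation.

```python
import numpy as np

def wind_coupled_charge_limit(wind_output_mw, power_mw):
    """Per-period charging limit when the store can only charge from the wind
    farm's own output: the lesser of the device rating and the wind output."""
    return np.minimum(np.asarray(wind_output_mw, dtype=float), power_mw)

# Inside a pairing routine such as the greedy_arbitrage sketch above, the charge
# headroom for a candidate charge period c would then use this per-period limit
# instead of the constant rating, e.g.:
#   charge_limit = wind_coupled_charge_limit(wind_mw, power_mw)
#   e_c = (charge_limit[c] + dispatch[c]) * dt * eta
```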
Hence the scenario considered is arbitrage under perfect foresight. We use Whitelee wind farm as an example, taking its final physical notifications of output during the 2013/14 season, retrieved from Elexon. Following the DOE/EPRI convention, the capital cost of battery systems was represented as the sum of a power term and an energy term, to allow systems of different c-rates to be compared. We ignore economies of scale for battery production, and assume that the specific cost is constant regardless of battery capacity. In addition, an efficiency of 80% and a cycle life of 5500 cycles were assumed. Note that lifetime is defined as the number of cycles before a 20–30% drop in capacity is observed. Hence the battery may be able to continue running beyond this period, but with reduced storage capacity and potentially lower power output due to an increase in internal resistance. These effects have not been accounted for in this analysis. For lithium ion batteries, capital costs of 1000 $/kW plus 700 $/kWh were assumed, with operational costs of 9.2 $/kW/yr and an efficiency of 90%. A lifetime of 6000 cycles was assumed. For both technologies, costs of capital have been ignored. For both NaS and Li batteries we note that there is a broad range of system costs and lifetimes, and as these are rapidly evolving, any choice of cost data will soon be obsolete. We choose a single, central value for each technology to perform the financial case study, and note that our primary metric scales inversely with capital cost. If, for example, capital costs fall by 50% from the values listed above, then annual returns will be double those presented in our results. This method of calculating profits means that any charging outside of availability windows is associated with the arbitrage component, including charging in preparation for STOR utilisation during a window. Devices can therefore register a financial loss from arbitrage when providing reserve utilisation. This section first explores the impact of various technology parameters upon the operating profile, profits and rates of return, all under perfect foresight. Some example applications are then demonstrated, calculating the rate of return for two battery technologies and its sensitivity to reserve utilisation and the introduction of no foresight. Lithium ion and sodium sulphur batteries are used as exemplary technologies, both being relatively well developed for stationary storage and possessing different cost and technical characteristics. Finally, the output of a wind farm is integrated, to observe the value that storage may provide to farm operators by shifting the delivery of electricity from periods of high wind output and low price to periods of low wind output and high price. All scenarios are based on historic half-hourly price data from the British electricity market, assessed over the period 01-04-2013 to 31-03-2014 unless otherwise stated. This section evaluates the effect of round trip efficiency on profits for our three scenarios under perfect foresight: arbitrage only ('ArbOnly'); arbitrage with availability ('ArbAv'); and arbitrage with availability and utilisation ('ArbAvUt'). Assumptions include a c-rate of 0.1, a STOR utilisation price of 89 £/MWh, an availability price of 5 £/MWh, and no marginal costs of charging/discharging other than the electricity purchased. The total specific profit for each scenario at various efficiencies is shown in Fig. 7.
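The cost convention and the headline return metric can be written down directly. This is a sketch under stated assumptions: the rate of return is taken here as annual profit net of fixed O&M divided by capital cost (the paper's exact definition may differ), the lithium ion figures are the ones quoted above, and the 10 MW / 100 MWh system with a 1.5 M$ annual profit is purely illustrative.

```python
def capital_cost(power_mw, energy_mwh, cost_per_kw, cost_per_kwh):
    """DOE/EPRI-style system cost: a power term plus an energy term."""
    return power_mw * 1e3 * cost_per_kw + energy_mwh * 1e3 * cost_per_kwh

def annual_rate_of_return(annual_profit, power_mw, energy_mwh,
                          cost_per_kw, cost_per_kwh, opex_per_kw_yr=0.0):
    """Annual profit net of fixed O&M, divided by capital cost."""
    capex = capital_cost(power_mw, energy_mwh, cost_per_kw, cost_per_kwh)
    return (annual_profit - opex_per_kw_yr * power_mw * 1e3) / capex

# Lithium ion figures quoted in the text: 1000 $/kW + 700 $/kWh and 9.2 $/kW/yr O&M.
# The 10 MW / 100 MWh system and its 1.5 M$ annual profit are purely illustrative.
ror = annual_rate_of_return(1.5e6, power_mw=10, energy_mwh=100,
                            cost_per_kw=1000, cost_per_kwh=700, opex_per_kw_yr=9.2)
print(f"capital cost: {capital_cost(10, 100, 1000, 700):,.0f} $")   # 80,000,000 $
print(f"rate of return: {ror:.1%}")                                 # about 1.8%
```

Because the metric is a simple ratio, halving either cost coefficient doubles the return for the same profit, which is the scaling noted above.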
At 100% efficiency, ArbOnly offers a specific profit of approximately 70 £/kW/yr, which lies between the values offered by the ArbAv and ArbAvUt scenarios. However, for efficiencies below 72% it is more profitable to offer reserve services, even with no utilisation, than to perform arbitrage alone. This is a result of the fixed payments for available capacity, which are independent of energy production and hence of efficiency. It also causes the ArbAv scenario to plateau at even lower efficiencies. This is particularly pertinent for technologies such as compressed air and hydrogen storage, which exhibit round trip efficiencies in the range of 54–74% and 41–49% respectively. ArbAvUt offers the greatest specific profits, with smaller devices exhibiting higher values than larger devices; this is discussed further in Section 4.2. It is worth noting the significant reduction in profits as efficiency falls: a 3 MW/30 MWh device would gain 195 £/kW/yr if it were perfectly efficient, but only 113 £/kW/yr if it were 70% efficient. A further breakdown into the sub-components of arbitrage ('Arb'), availability ('Avail') and utilisation ('Util') for each of the three scenarios is presented in Fig. 8 for a 100 MW/1000 MWh device. Also compared are operation profiles for round trip efficiencies of 70% and 100% over the first 4 days of the year. There is a stark difference between the operation profiles in Fig. 8a, highlighting how low efficiency results in fewer periods where the price spread can make up for efficiency losses. This is further accentuated in the ArbAv scenario, as seen by comparing Fig. 8b. Many of the previously used periods of high price are inaccessible for arbitrage due to overlap with availability windows; however, the availability payments make up for this. Finally, for the first two days of the ArbAvUt scenario, no discharging occurs outside of availability windows, hence no positive profit is associated with arbitrage during this time.
In fact, from Fig. 8 it is clear that the arbitrage profit component for this scenario is negative at all efficiencies. The arbitrage component consists of all charging and discharging outside of availability windows, which includes the profit generated from arbitrage as well as the cost of charging to cover the utilisation during availability windows. Hence, in the ArbAvUt scenario, as the charge efficiency drops, the cost of charging in preparation for utilisation increases, resulting in an increasingly negative arbitrage component. The specific profit is independent of discharge capacity for the ArbOnly and ArbAv scenarios. However, discharge capacity does affect the ArbAvUt scenario, via its interaction with the volume of STOR utilisation that occurs. As STOR requires a generator to run at a fixed output level, smaller discharge capacities can be utilised more often. A 10 MW device returns a specific profit of 180 £/kW/yr, whilst a 100 MW device returns 89.3 £/kW/yr, assuming a constant c-rate of 0.1, a round trip efficiency of 1, a STOR utilisation price of 89 £/MWh, an availability price of 5 £/MWh and no marginal costs of charging/discharging. As the discharge capacity reduces, the profit attributed to utilisation increases, in line with an increase in STOR utilisation. In fact, the utilisation price was set to 89 £/MWh, which essentially places the device first in the 'merit order' for STOR despatch. Thus the assumptions of the model mean that despatch occurs as long as national demand for STOR is greater than the device's discharge capacity. Whilst the specific profit earned is greatest for smaller devices, the absolute profit increases with size. Fig. 9 highlights this, where the highest utilisation profit component is obtained for a 100 MW device. Above this, the increase in MW offered is outweighed by the reduction in the number of times the device is called upon, resulting in a net reduction in utilisation MWh. Naturally, however, the availability and arbitrage components increase in an approximately linear fashion, resulting in an overall increase in total profits. To place the specific profits discussed earlier into context, rates of return on exemplar sodium sulphur and lithium ion batteries have been evaluated. The specific profit for the ArbOnly and ArbAv scenarios depends on the efficiency and c-rate, but is independent of discharge capacity. Fig. 10a and b present the variation of specific profit and rate of return with c-rate. Li batteries achieve greater specific profits due to their efficiency advantage, but the lower cost of NaS batteries results in higher rates of return. Furthermore, the greatest specific profits do not result in the greatest rates of return, as the increase in capital cost outweighs the additional revenue captured. Nevertheless, a peak rate of return of only 1.98% is achieved, which is too low to be viable, as discussed in Section 4.4.
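The interaction between contracted capacity and utilisation described above, whereby smaller devices clear the despatch condition more often but deliver less energy per call, can be illustrated with a toy calculation. The demand profile and availability-window mask below are synthetic placeholders, not the National Grid data used in the paper, so the numbers are purely indicative.

```python
import numpy as np

def stor_utilisation(national_demand_mw, in_window, device_mw, dt=0.5):
    """Energy (MWh) delivered by a device that sits first in the merit order,
    assuming it is despatched at its full contracted level whenever national
    STOR demand inside an availability window is at least that level."""
    demand = np.asarray(national_demand_mw, dtype=float)
    called = in_window & (demand >= device_mw)
    return device_mw * called.sum() * dt

# Placeholder inputs: a synthetic half-hourly demand profile and window mask for one year.
rng = np.random.default_rng(0)
demand = rng.gamma(shape=2.0, scale=60.0, size=48 * 365)                     # MW
daily_windows = np.r_[np.zeros(26), np.ones(10), np.zeros(12)].astype(bool)  # 5 h of windows per day
windows = np.tile(daily_windows, 365)

for mw in (10, 50, 100, 200):
    print(f"{mw} MW contracted -> {stor_utilisation(demand, windows, mw):.0f} MWh/yr")
```

With such a profile, utilised energy first rises with contracted capacity and then falls once the loss of qualifying periods dominates, mirroring the qualitative shape of Fig. 9; the actual turnover point depends entirely on the real demand data.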
Fig. 11 displays the rates of return and the specific profits obtained under the ArbAvUt scenario for NaS and Li batteries. Various storage capacities are displayed, with c-rates ranging from 0.1 to 1. Devices with lower discharge capacities tend to exhibit greater specific profits. Moreover, devices with the lowest c-rates tend to offer the highest specific profits. This is due to the nature of the specific profit measure, where devices with equal discharge capacities but larger energy stores are inevitably able to capture greater profits. However, the highest specific profits do not result in the highest rates of return. For a given capacity, there appears to be an optimal c-rate that maximises the rate of return. This behaviour is a result of the interaction between a reduction in specific profits as c-rates increase and a simultaneous reduction in specific capital costs. For instance, a 3 MW/30 MWh sodium sulphur battery may achieve specific profits of 142.3 £/kW/yr, at a capital cost of 2796 £/kW, whereas a 12 MW/30 MWh NaS battery may achieve specific profits of 73.2 £/kW/yr, but with capital costs of 936 £/kW. Hence the latter gains a net benefit, returning 7.5% compared to 5.0% for the former. When comparing the two battery types, the lower cost of NaS results in rates of return of up to 7.5%, compared to 4.4% for Li batteries. However, these values are too low to be viable. For the NaS battery, the assumed lifetime of 5500 full charge-discharge cycles is equivalent to 7.5 years under this scenario, which means the minimum rate of return to break even would be 13.3% due to depreciation. For the Li battery, this minimum is 12.5%. These minimums ignore the cost of financing and the time value of money; with a 5% discount rate, the Li break-even rate would be 15.5%, and with a 10% rate it would be 18.8%. However, the cycle life assumes end of life occurs when 80% of the original capacity remains. For grid storage, it may be worthwhile to continue operation beyond this point. Despite this, some level of cost reduction, efficiency increase, or in particular an increase in lifetimes would be necessary. In order to test the sensitivity of the model to STOR utilisation volumes, randomness was introduced to the estimated national STOR demand profile by multiplying the hourly profile by a brown noise signal and rescaling to the original annual level of national STOR demand. The desired impact of introducing randomness was to change the timing of utilisation, and hence to evaluate the impact upon the profits associated with the arbitrage component. However, whilst over a year the total volume of national demand is unchanged, the volume that is accessible to the storage device does change, due to the constraint that national demand must exceed the device's contracted output level in MW. Hence the two effects are distinguished below. The model was run 500 times with Monte Carlo inputs, which saw the total device utilisation volumes vary by 23.6% from minimum to maximum. The variation found in the total profit was 8%, of which only 1.4% was directly attributable to the different timings of utilisation, with the balance a result of the changing utilisation volumes. The 1.4% arises from shifting the time of utilisation, and hence the times when charging occurs in preparation for the availability windows. The previous sections have discussed the model under perfect foresight. This section explores the profits that can be gained when operating under no foresight. To reiterate the method, this implies that the input price stream is an estimate based on past averages, and that STOR utilisation volumes are not known to the storage operator ahead of time.
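The break-even figures quoted above follow from a standard capital recovery calculation: with no discounting the hurdle rate is simply one over the lifetime in years, and with a discount rate it becomes the capital recovery factor. The sketch below assumes the Li lifetime corresponds to roughly eight years of cycling, as implied by the quoted 12.5% undiscounted minimum; that conversion is an inference, not a figure stated in the text.

```python
def capital_recovery_factor(rate, years):
    """Annual repayment per unit of capital over `years` at discount rate `rate`;
    this is the minimum annual rate of return needed to break even."""
    if rate == 0:
        return 1.0 / years                  # straight-line depreciation only
    growth = (1 + rate) ** years
    return rate * growth / (growth - 1)

# NaS: 5500 cycles is roughly 7.5 years of operation in this scenario
print(f"NaS, no discounting:  {capital_recovery_factor(0.00, 7.5):.1%}")  # 13.3%
# Li: about 8 years, implied by the 12.5% undiscounted minimum
print(f"Li, 5% discount rate: {capital_recovery_factor(0.05, 8):.1%}")    # about 15.5%
print(f"Li, 10% discount rate: {capital_recovery_factor(0.10, 8):.1%}")   # about 18.7%, close to the 18.8% quoted
```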
For a round trip efficiency of 1, the profits with no foresight range between 88% and 98% of those with perfect foresight. With an efficiency of 0.8, this drops to 75% for ArbOnly and 96% for ArbAvUt. The certainty of availability payments makes reserve more favourable with no foresight: the ArbAv scenario is more profitable than the ArbOnly scenario for efficiencies of less than 0.72 with perfect foresight; however, with no foresight this crossover point increases to 0.85. These observations can be explained by the two factors that no foresight introduces. The first is the use of estimated future prices, which affects the arbitrage component of all scenarios. This is due to the difference between the estimated and real prices. The ArbOnly scenario is most exposed, as all profits are derived from arbitrage, whereas for the ArbAv and ArbAvUt scenarios the proportion of profits from arbitrage is lower, due to the fixed availability and utilisation payments. Furthermore, the sensitivity of ArbOnly with no foresight to efficiency is likely due to the significantly fewer hours over which arbitrage operates at lower efficiencies. Hence any discrepancies between the predicted and actual prices are magnified. At very low efficiencies, hardly any arbitrage is performed at all, resulting in the convergence of the ArbOnly profits with perfect and no foresight. The second factor is the unknown future STOR volumes, which results in increased restrictions over when corrective action can take place. For instance, following utilisation in an availability window, the storage device may have to charge up prior to the next window, even if the price is high. Under perfect foresight, advance planning is effectively permitted, such that the device could charge up ahead of both windows, avoiding the high prices in between. A final point of note is that for the ArbOnly scenario, the algorithm with no foresight achieves 88% of the optimal profits at an efficiency of 1. This can be considered quite high for having simply used a price profile based on averages over the previous year's STOR season. It reflects the importance of the profile shape for arbitrage rather than its mean value: it is safe to assume that on most days, discharging between 5 and 7 pm would be optimal.
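The estimated price profile used in these no-foresight runs can be constructed in a few lines. The sketch below is a plausible reading of that step, assuming half-hourly outturn prices indexed by timestamp and a simple meteorological season mapping; the paper's exact season boundaries and data layout are not specified, so both are assumptions.

```python
import pandas as pd

def estimated_price_series(prev_year_prices, target_index):
    """No-foresight price estimate: the average daily price profile of each
    season in the previous year, repeated across the target period.

    prev_year_prices : pd.Series of half-hourly outturn prices with a DatetimeIndex
    target_index     : half-hourly pd.DatetimeIndex for the period to be simulated
    """
    def season(ts):
        # simple meteorological seasons; the paper's exact season boundaries may differ
        return {12: 'winter', 1: 'winter', 2: 'winter',
                3: 'spring', 4: 'spring', 5: 'spring',
                6: 'summer', 7: 'summer', 8: 'summer'}.get(ts.month, 'autumn')

    hist = prev_year_prices.to_frame('price')
    hist['season'] = [season(t) for t in hist.index]
    hist['period'] = hist.index.hour * 2 + hist.index.minute // 30   # settlement period 0-47
    profile = hist.groupby(['season', 'period'])['price'].mean()

    estimate = [profile[(season(t), t.hour * 2 + t.minute // 30)] for t in target_index]
    return pd.Series(estimate, index=target_index)
```

The storage schedule is optimised against this estimate, and the profits reported above come from applying the resulting dispatch to the real outturn prices.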
Fig. 12a and b present results for integrating a storage device with the 322 MW Whitelee wind farm during 2013/14, considering arbitrage under perfect foresight. The figures display the rates of return for NaS and Li batteries with the same specifications as given in Section 4.3, for various capacities and c-rates. The returns are based on additional profits over and above selling the wind farm's output directly on the spot market. The greatest returns were obtained for the smallest capacities, as the arbitrage benefits diminish as more storage is employed. The optimal c-rate is between 0.3 and 0.4, implying that around 1 MW of battery capacity was optimal. Note that we ignore economies of scale in producing batteries, and so this result may change if larger batteries are significantly cheaper per MW. Maximum rates of return of only 1.89% and 1.22% were recorded for NaS and Li batteries respectively, hence battery-based arbitrage with a wind farm is not viable with current battery costs and wholesale prices. Either further revenue streams must be sought, the capital cost of storage must dramatically fall, or the value of shifting the time of delivery must increase. The latter may occur in the future if wind penetration increases, and hence periods of high output depress the spot price more markedly. Alternatively, integrating storage with a wind farm enables some level of control over the farm's output. This could provide operators with enough confidence to sell output directly on the spot market as opposed to via a power purchase agreement. The fixed price per MWh offered in PPAs is lower than the average spot price of power, as the counterparty is exposed to the risk of price volatility. If this risk premium is assumed to be 10% of the output-weighted average spot price, this would result in an additional income of £4,200,000 per annum, irrespective of storage size. It is not meaningful to add this to the rate of return of the storage device, as it is a qualitative benefit based on operator confidence in the marketplace. The size of the storage plays a qualitative role in reducing perceived investor risk. As the penetration of low carbon intermittent or inflexible forms of generation increases, system integration costs inevitably rise. Storage offers a solution to limit these costs; however, to date it is still considered too costly to be an effective solution. Either costs have to decrease or storage operators have to maximise use of their devices to obtain as much profit as possible. Most studies in the literature have aimed to optimise some form of storage either for a single revenue stream such as arbitrage, or have performed analyses using computationally expensive global optimisation tools. Additionally, a good prognosis of future prices was typically required. This research has developed and demonstrated a simple, generic algorithm that can optimise a storage device for arbitrage, with or without reserve services, under both perfect and no foresight. We make the MATLAB implementation of this algorithm available to the community to help foster future research. For an exemplar sodium sulphur battery, the maximum annual rate of return obtained for performing arbitrage only in the British market was 1.98%, but this increased to 7.50% for arbitrage with reserve, both under perfect foresight. For a lithium battery, returns were lower at 1.28% and 4.4%, due to the higher capital cost. Operation under no foresight was found to reduce profits by 5–25%.
Also, integrating a sodium sulphur battery with a wind farm to shift the time of delivery was found to produce a maximum rate of return of 1.89%, compared to 1.22% for a lithium battery. With current battery lifetimes and electricity prices, the rates of return obtained even under perfect foresight are unlikely to prove viable. Either costs must reduce, alternative technologies with longer lifetimes must be sought, additional revenue streams must become accessible, or the fundamental dynamics of the electricity market must change. Despite finding that storage would not be viable in any of the considered scenarios, the algorithm developed was successful at providing a simple means of optimising the control of storage, and future work should extend it to further revenue streams. In particular, the lack of transparent data resulted in the use of STOR market data for reserve services; however, the technical properties of storage mean that it may gain greater benefit from operating in shorter timescale markets, such as fast reserve or response. Alternative income may also be gained from triad avoidance in order to minimise transmission use-of-system charges, from reducing imbalance costs for a wind farm, or from participating in flexible reserve in conjunction with a wind farm. The algorithm could also be further developed to include impacts on lifetime within the decision process; currently, lifetime is post-processed as a result rather than treated as an optimisation variable. Furthermore, refinements to the no-foresight algorithm could be made through improved forecasting of future prices using correlation with temperature forecasts.
Grid-scale energy storage promises to reduce the cost of decarbonising electricity, but is not yet economically viable. Either costs must fall, or revenue must be extracted from more of the services that storage provides the electricity system. To help understand the economic prospects for storage, we review the sources of revenue available and the barriers faced in accessing them. We then demonstrate a simple algorithm that maximises the profit from storage providing arbitrage with reserve under both perfect and no foresight, which avoids complex linear programming techniques. This is made open source and freely available to help promote further research. We demonstrate that battery systems in the UK could triple their profits by participating in the reserve market rather than just providing arbitrage. With no foresight of future prices, 75–95% of the optimal profits are gained. In addition, we model a battery combined with a 322 MW wind farm to evaluate the benefits of shifting time of delivery. The revenues currently available are not sufficient to justify the current investment costs for battery technologies, and so further revenue streams and cost reductions are required.
486
Gaze gesture based human robot interaction for laparoscopic surgery
Technological advances over the past decade have enabled the routine use of Minimally Invasive Surgery in an increasing number of clinical specialities.MIS offers several benefits to patients including a reduction in operating trauma, post-operative pain, and faster recovery times.It has also led to budgetary benefits for hospitals through cost savings from reduced hospitalisation duration.Performing laparoscopic surgery requires bimanual manipulation of surgical instruments by the surgeon.The field-of-view of laparoscopic cameras is usually very narrow.In order to assist with the navigation during the operation, a surgical assistant usually manoeuvres the laparoscope camera on behalf of the operating surgeon.Understanding the surgeon’s desired FOV, and communication via verbal instruction can be challenging.Failure to provide good visualisation of the operating field not only induces greater mental workload on the surgeon, but also leads to unrecognised collateral injuries.The need for good camera handling has been recognised as an important step in the training curricula for surgical residents.In spite of such training, issues related to assistant fatigue, hand tremor and the confined shared workspace for the surgeon and camera assistant persist.There are also man-power and cost implications to requiring a highly skilled camera assistant, which if addressed, could allow a surgeon to operate solo, thereby improving cost and staffing efficiency.In order to address the above deficiencies, a number of commercial robotic assisted camera systems have been developed.These include the EndoAssist, which is controlled by the user’s head-mounted infrared emitter, the verbally controlled Automatic Endoscope Optimal Position system, and the finger stick controlled SoloAssist from AktorMed.Gaze information obtained by a remote eye tracker, i.e. 
where the person is looking, can be used to create a gaze contingent control system to move the camera.With this gaze contingent control, it is possible to move the camera by following the user’s gaze position on the target anatomy.A number of gaze contingent robotic assisted camera systems have been developed, as well as methods using gaze data to recover 3D fixation points and perform camera motion stabilisation.Existing gaze contingent camera systems, however, only have a panning control of the laparoscope.The lack of zoom and tilt control complicates effective navigation during surgery.The system proposed by Noonan, for example, requires a foot-pedal which the user needs to press to activate.The need to introduce additional hardware such as foot-pedals can lead to instrument clutter in an already complex environment.Furthermore, existing eye-controlled platforms often use dwell time on fixed regions to indicate a user’s intention, which can be difficult to use in practice.Several methods have been developed using gaze data as a central component for intention recognition in a robotic system, for automatic laser targeting and adaptive motion scaling.In order to overcome these problems, this article introduces a gaze contingent robotic camera control system where the camera is activated via real-time gaze gestures rather than an external switch.Through the use of multiple gestures, which are statistically learned and can map to specific camera control commands, we show that it is possible to use different camera control modes such as panning, zooming, and tilting without interfering with the user’s natural visual search behaviour.The proposed system also incorporates a novel online calibration algorithm for the gaze tracker overcoming the need of an explicit offline calibration procedure.The proposed gaze gesture based human-computer interaction method differs from previous gaze based interaction methods such as the eye mouse, which uses dwell time to convey user intention of mouse clicking, or the Manual And Gaze Input Cascaded pointing method, which moves the cursor position in close proximity to the target location, but relies on the user to convey their intention with a small manual cursor movement and mouse click.The work presented here builds on the “Perceptual Docking” paradigm introduced by Yang et al., and extends initial work presented in Fujii et al.The key novelties of the work presented include; i) the capability to pan, zoom, and tilt the camera; ii) the ability to seamlessly switch between panning and zooming control by using the distance from the user’s face to convey the intention to zoom in or out; and iii) an implicit online calibration method for the gaze tracker, which overcomes the need of an explicit offline gaze calibration before using the gaze contingent system, offering a fast, frustration-free user experience of the system.Furthermore, an exhaustive evaluation of these novelties is presented, drawing data from extensive user studies.The article is organised as follows; Section 2.1 presents the gaze contingent laparoscope system, including the system design and implementation.The proposed online calibration method is detailed in Section 2.2.Finally, Sections 3.1 and 3.2 present a detailed evaluation of the gaze contingent laparoscopic control and the online calibration respectively.The aim of the proposed system is to provide an interface that will enable hands-free camera activation, allowing the surgeon to perform a bimanual task without the need for a camera 
assistant.Furthermore, in order to limit the cognitive burden on the surgeon, this interface must function without requiring additional foot-pedal hardware or using gaze dwell-time methods.This is achieved through the use of gaze gestures.Gaze gestures are based on fast eye movements, i.e. saccadic movements rather than fixations.They consist of a predefined sequence of eye movements.Gaze gestures can be single-stroke or multi-stroke.When the user performs the intended unique sequence of saccadic eye movements, a specific command is activated.The use of gaze gestures have previously been applied to eye typing, Human-Computer Interaction, mobile phone interaction and gaming.The key components of the proposed system are illustrated in Fig. 1.It comprises of a Tobii 1750 remote gaze tracker, a Kuka Light Weight Robot, a 10 mm zero degree Karl Storz rigid endoscope, and a Storz Tele Pack light box and camera.Two Storz Matkowitz grasping forceps and an upper gastrointestinal phantom with simulated white lesions were used for the evaluation of the system.Additionally, the laparoscope and surgical tools were tracked using an NDI Polaris Vicra infrared tracker during the experiments.The human eye is normally used for information gathering rather than to convey intention to control external devices.As such, the main challenge of using gaze gestures is distinguishing natural gaze patterns from intentional gaze gestures with high accuracy and precision.To this end, pattern recognition methods are necessary to learn these gaze gestures.The proposed system uses gaze gesture recognition based on Hidden Markov Model to learn multiple input commands from a surgeon in order to convey the desired camera control mode.Two possible gaze gestures are introduced to control the camera: activate camera and tilt camera.The activate camera gaze gesture is illustrated in Fig. 2.It is defined by the following three-stroke sequence of eye movements: gaze at the centre of the screen, then to the bottom right corner, then back to the centre, and finally back to the bottom right corner.The tilt camera gaze gesture is similar to the activate camera gaze gesture but the user is instead required to look at the bottom left corner of the screen, as shown in Fig. 2.Gaze gestures oriented towards the corner of the screen are chosen to prevent obstruction of the camera view, minimise the amount of screen space necessary, and reduce detection of involuntary gaze gestures.Text labels were placed at the bottom left and right corners of the screen to indicate the mode to be activated by performing a gaze gesture in that direction.Once a gaze gesture is identified, the robotic arm is activated and the user is able to control the laparoscope with his gaze.The user can also deactivate the gaze contingent laparoscopic control via gaze gesture.The control mechanism of the system is represented in Fig. 3.It is composed of two processes: a gaze gesture recognition process and a robot control process.The gaze gesture recognition process analyses inbound gaze data and identifies whether a gesture has been performed.The robot control process uses the Point-of-Regard data, i.e. 
where the user is looking at, to generate the robot trajectory.Finally, the user can stop the robotic camera control by fixating the stop camera text present at the bottom left corner of the screen during robotic control.The xy coordinates from the segmented trajectories of a potential gaze gesture are then clustered using a pre-trained k-means algorithm.Each cluster’s symbol number, centroid coordinates, and radius are used collectively to create a discrete codebook that captures the relevant features of the gaze gestures, i.e. the xy coordinates.The codebook was designed offline from 600 gaze gesture training data sequences, and five clusters were chosen for the k-means algorithm.Each potential gaze gesture sequence is encoded using the codebook, where symbol numbers are assigned to each observation by using the distance between the observation and the centroid of each cluster, provided that it is within the defined radius.If an observation is outside the feature space, it is discarded.In order to recognise the segmented potential gaze gestures, two left-to-right HMMs were used for each camera control activation mode.Unlike in Mollenbach et al., gaze patterns are not analysed here just to identify the nature of the gaze data, i.e. whether it constitutes a continuous motion, fixation or a stroke, but to classify which kind of gesture is being performed.The activate camera gaze gesture is modelled by HMM1 and enables panning and zooming control of the laparoscope.The tilt camera gaze gesture is modelled by HMM2 and enables rotation around the laparoscope’s longitudinal axis.Each of the HMMs model parameters was trained offline using a set of gaze gesture training data.During training, both intentional and unintentional gaze gesture data sequences were included in our data sets.The data sets were collected from twenty participants who did not participate in usability trials of the gaze contingent laparoscope system.Participants performed a gaze calibration procedure prior to the data collection, and the accuracy of the calibration was verified.Each participant provided 30 repetitions of each of the two types of intentional gaze gestures.The task during the gaze gesture data collection was to perform the three-legged gaze gestures whilst observing a black screen with white guidance dots in the middle and in the lower two corners of the screen.Participants were asked to perform a gaze gesture starting from the middle, moving to the corner, back to the centre and the back to the corner.The resulting training data consisted of 600 intentional gaze gestures for each HMM.Additionally, unintentional gaze gesture data was collected during a five-minute web browsing task.More specifically, the unintended gaze gestures data were collected whilst viewing a number of websites which consisted of image based content.Subjects were asked to spend five-minutes browsing the site while eye tracking data was being recorded in the background.The intention was to simulate random gaze behaviour.The data collected during this task was used in the learning phase of the HMMs in order to improve robustness of the HMMs against false positives.Each of the 600 intentional gaze gesture training sequences was encoded using the formulated k-means clustering codebook.An initial state probability is defined within this set of training data observations, and optimal state transition and emission probabilities that describe the set of training observations are iteratively obtained using the Baum–Welch algorithm.The initial 
state probabilities were randomly initialised between 0 and 1.To improve the trade-off between sensitivity, false positive rate and overall complexity of the system, a 10-fold cross validation was run across HMMs with different numbers of states.90% of the encoded training sequences were used with the Baum-Welch algorithm to iteratively obtain the HMM parameters.With each of the seven training sets, a detection probability threshold was set at a 95% confidence limit of the training data sequences’ inference values, i.e. the probabilities of these sequences given the trained HMM.The recognition accuracy of the HMM is then defined by using this threshold on the remaining 10%, the validation data set.A six state HMM with an inference threshold of 0.7 was found to provide the best overall performance for both the activate camera and tilt camera gaze gesture detection.The rationale for choosing this threshold value is apparent in the receiver operating characteristic curve of the 6 state HMM which is illustrated in Fig. 4, with the respective close up version shown in Fig. 4.From these figures, it is observable that the threshold of 0.7 provides the best trade-off between sensitivity and false positive rate, with a sensitivity of 0.98 and a false positive rate of 0.01 for the activate camera and 0.98 and 0.02 respectively for the tilt camera.The ROC curve illustrates the robustness of the three-stroke gaze gestures toward the edge of the screen; there is very little overlap between the visual search behavioural noise and the gaze gestures.Inference value histograms obtained from testing the 6 state HMMs with the unintended gaze gesture data are shown for the activate camera and tilt camera gestures in Fig. 4.As shown in Fig. 4, both HMM1 and HMM2 are able to clearly differentiate between activate camera and tilt camera gaze gestures, with virtually no overlap between gesture inference values.After obtaining the model parameters, the forward-backward algorithm is used to obtain the probability of the encoded gaze gesture sequence given the respective trained HMM.The recognised gesture is the one with the maximum inference value from the two HMMs, given that it is above the inference value threshold defined during the training.Once one of the gaze gestures is recognised, the noise-reduced PoR is sent to the robotic arm in order to control it; otherwise no input is given to the robotic arm.The implemented control User Interface is illustrated in Fig.
5.On system initialisation, the camera is stationary and the system waits for a gaze gesture input from the user.The user has the option to control the camera via activate camera or tilt camera modes.Activate camera mode enables panning or zooming.It is activated by one gaze gesture, and switching between panning and zooming is enabled by moving the head forward or backward.This provides a combined pan and zoom control for surgeons to seamlessly control the robot.In the tilt camera mode, which is activated by a different gaze gesture, the system allows the camera to rotate the view around the laparoscope’s longitudinal axis.Guidance text is overlaid onto the camera view, and the camera can also be stopped by fixating the stop camera text at the bottom left-hand corner of the screen.The stop camera command is identified by detecting dwell-time fixations of at least 750 ms in that region of the screen.In order to address the potential uncertainties when the eye tracker loses tracking of the user’s eyes, a safety mechanism is introduced whereby the robotic system immediately stops if gaze tracking is lost.On re-detection of the user’s gaze, the robotic laparoscope resumes with the same control mode as before the tracking was lost.Conventional remote gaze trackers require an explicit offline calibration procedure to map the optical axis of the user’s eye to account for their visual axis.This process typically requires the user to fixate on a moving spot presented on the screen.During this procedure a set of the user’s uncalibrated PoR coordinates at a number of predetermined locations on the screen, also known as calibration points, are recorded.The PoR of the user is then corrected using a mapping function which relates the captured PoR coordinates to their respective screen coordinates.A known problem of offline calibration reported in numerous cases is that the calibration can drift over time, thus affecting the accuracy and precision of the estimated PoR and potentially requiring the surgeon to recalibrate during surgery.This deterioration of the offline calibration is typically associated with naturally occurring changes in the user’s posture or head position.A quantitative study of drift during laparoscopic surgery is detailed in Appendix B.Given that the tilt and zoom control modes require explicit head movements, the need for online calibration is even more critical in the presented work.Previous gaze tracker systems have used as few as one calibration point with an accuracy of 1° of visual angle.However, these systems utilise multiple cameras and/or light sources, and still require an explicit offline calibration that is susceptible to calibration drift.Calibration drift would not only lead to a poor user experience, but also raise safety concerns for use in the operating theatre.To overcome these problems, an implicit online calibration process that progressively adapts to the user’s changing gaze is introduced.Since the proposed online calibration process replaces the conventional offline calibration, the surgeon is able to use the robotic laparoscope system immediately.Furthermore, the adaptive nature of the algorithm overcomes the calibration drift as it updates with continued use, thus allowing the surgeon to use the gaze contingent system for longer periods without recalibrating during an operation.The online calibration algorithm takes advantage of the pre-learnt gaze gesture information to extract relevant PoR coordinates in an ongoing manner to
form and update the mapping function.The proposed online calibration algorithm can be applied to any remote gaze tracker system as long as it possesses user-interactive elements with known positions on the screen, such as menu navigation with the user’s PoR, eye typing, or others such as an automatic scroll mechanism during reading.Once user interaction is recognised, calibration points can be captured and used to remap the user’s PoR.Unlike in Chen and Ji, no assumptions are made on the content of the camera image for the online calibration to function.The presented online approach integrates seamlessly within the gaze gesture framework by taking advantage of the same probabilistic approach used to identify the gaze gestures.The gaze gestures require the user to look at the centre and one of the bottom corners of the screen.By extracting the PoR coordinates at these instances, the online calibration process uses these coordinates to populate the subject-specific calibration mapping on the fly.The assumption behind the online calibration is that the user is looking at specific areas located at the center and corners of the screen to perform the gestures.This assumption is made valid because the corner locations are made explicit with text describing the control mode, and because users are trained to use the gestures beforehand.This is further restricted by requiring gestures to be in certain quadrants of the screen, and forming a specific pattern with a particular orientation, i.e. of the form of a gaze gesture that was used to train the gaze gesture model.As shown in Fig. 6, the online calibration process first applies a median filter to the stream of unmapped PoR coordinates from the gaze tracker to reduce noise.Potential gaze gestures are then extracted in the same manner as in the gaze gesture recognition process in Fig. 
3.At this stage the gaze gesture sequence consists of a series of unmapped and therefore inaccurate PoR coordinates.To determine whether a potential gaze gesture is not a false positive, principal component analysis is applied to the segmented potential gaze gesture.The majority of the trajectory’s information is contained in the first and second principal components, and in the form of an elongated diagonal.As such, gaze gestures with a PC1/PC2 ratio below a threshold of 5 are counted as false positives and filtered out.The angle of PC1 indicates the quadrant location of the potential gaze gesture.It is then possible to distinguish whether the segmented gaze gesture is associated with activate camera or tilt camera.If PC1 lies on the fourth quadrant, then it is related to the activate camera mode, while if it lies on the third quadrant then it is associated to the tilt camera mode.The absolute positions of the extracted PoR coordinates centroids from the gaze gesture are also stored to rule out inadequate gestures towards the upper left/right corners of the image.The stored coordinates are subsequently filtered by computing the centroid and standard deviation of each PoR coordinate within their respective buffers.The final PoR centroid to be used in the mapping function is recomputed excluding any coordinates that fall outside one standard deviation from the initial computed centroid.These centroids are then used in the calibration mapping to map the OA to the VA.The calibration mapping incorporated in the algorithm is a thin plate spline based radial basis function mapping.TPS, a special polyharmonic spline, was chosen for the gaze calibration mapping due to its elegant characteristics to interpolate surfaces over scattered data.The TPS was first introduced by Duchon, and has previously been used for various computer vision and biological data such as in image registration.Commercial eye trackers such as the one used in this paper are prone to user-specific errors in gaze tracking.As such these products typically require an additional calibration procedure to be performed by users prior to working with the eye tracker.The TPS method presented in this work effectively replaces the commercially provided additional calibration procedure with one of our own.Additionally, the proposed calibration procedure is implicit rather than explicit, thus making it much more user friendly.Further details on how the TPS is implemented as a mapping function can be found in Appendix A.A minimum of three calibration points will be needed to compute the TPS mapping.However, the gaze contingent laparoscope system utilises two gaze gestures, giving room to obtain between two and three calibration points only.Therefore, prior to using the calibration points for the online mapping, the final calibration points associated with the centre, bottom left and right corners of the screen are extrapolated to increase the number of calibration points to five points.Note that on initialisation of the system, there can only be two calibration points obtained from the first gaze gesture received from the user, i.e. either a activate camera or tilt camera gaze gesture.Therefore, symmetry along both the vertical and horizontal eye rotation is assumed and the two calibration points are extrapolated to five calibration points when the first gaze gesture has been successfully performed.This scenario is illustrated in Fig. 
8.When both gaze gestures have been performed, the three calibration points are used to extrapolate to five calibration points as shown in Fig. 8.A conventional offline calibration procedure uses between five and nine calibration points to establish a gaze mapping function for accurate PoR estimation.Once five calibration points have been extrapolated, the relevant mapping parameters can be obtained by solving a linear system of equations together with the calibration screen coordinates shown in Fig. 8.Once a calibration mapping is formed, the previously segmented gaze gesture is remapped via the calibration mapping and tested against the two gaze gesture HMM models.If the gaze gesture returns an inference value above either of the two HMM thresholds, the extracted pupil coordinates are deemed accurate and are kept in respective circular buffers as calibration points, otherwise they are discarded.As the user continues to use the gaze contingent laparoscope system and inputs gaze gestures, the stored pupil coordinate points can be used to build a more robust gaze tracking calibration mapping, whilst also accounting for any calibration drift as the user moves around.The overall online calibration algorithm integrates closely with the gaze gesture recognition algorithm, enabling the surgeon to seamlessly start using the robotic laparoscope system without having to perform an offline gaze tracker calibration.The complete operative workflow is illustrated in Fig. 7, where a surgical resident performs a lesion removal task on an upper gastrointestinal phantom.In order to assess the accuracy of the gaze gesture recognition and examine the usability of the proposed system by comparing it to other methods, subjects were asked to perform the same task using three different camera control schemes: (i) the proposed gaze gesture control, with gesture-based mode activation and camera control through PoR and head position; (ii) pedal activated control, with dual-switch foot-pedal mode activation and camera control through PoR and head position; and (iii) camera assistant control, in which a camera assistant follows the verbal instructions of the participant and navigates the camera.In the foot-pedal control mode, the activate camera and tilt camera modes are activated via the left and right pedals respectively.The pedals need to be kept pressed to maintain the chosen control mode, and the camera movement is stopped when the user releases the foot pedal.The laparoscope is navigated in exactly the same manner as shown in Fig.
5.The experimental setup is identical to the one described in Section 2.1.1.The HMM gaze gesture recognition process and the robot control process were implemented in C++.The gaze gesture recognition process ran at 33.3 Hz whilst the robot control process updated at 200 Hz.Experimental data which consisted of subject PoR, gaze gestures, and camera-view feed were recorded at a rate of 33.3 Hz.The surgical instrument tip trajectories were recorded with the Polaris at 17 Hz.During the camera assistant control mode the laparoscope tip position was tracked with a Polaris marker.In other modes it was instead obtained from the robot forward kinematics.Instrument trajectory tracking was undertaken as peer-reviewed literature has shown that instrument trajectory path length correlate to the level of surgical performance.In this usability study, seventeen surgical residents with a postgraduate year between 3–7 were recruited.The mean laparoscopic experience was 676 cases.All participants were trained to use the gaze gesture and pedal activated systems on an abstract navigation task before starting the study.This training was performed to prevent potential learning effects when performing the subsequent phantom based task.The abstract training task required the subject to navigate the laparoscope system inside a conventional box trainer to locate numbers in ascending order.The numbers of varying font sizes were placed randomly on a 4 × 5 grid in order to require the user to both pan and zoom during the training.Subject training was halted when: a minimum baseline proficiency task completion time was met; when they showed no further improvement in completion time; and, when they could reproduce a similar completion time consecutively on three occasions.A second training task was used for the tilt control modality of the camera.Subjects were asked to re-align three operative scenes to a conventional anatomical orientation.The task involved tilting the camera left, right, then left by 15°, 65° and 35° respectively.Once a scene was correctly re-aligned, the next scene was presented to the participant.The task involved subjects identifying and removing a set number of randomly placed lesions on an upper gastrointestinal phantom.The task was a simulated upper gastrointestinal staging laparoscopy and the phantom was placed in a laparoscopic box trainer.The nature of the simulated task required subjects to use a bimanual technique, typically with one instrument manipulating and/or retracting tissue, and the other grasping and removing the lesion.The surgeons were allowed and encouraged to physically look at the phantom model before lesions were placed to familiarise themselves with it.This procedure was introduced to minimise the potential confounding factor of learning the phantom model.Participants were asked to perform the lesion removing task twice, for each of the three camera control modes mentioned previously in Section 2.2, namely, via i) gaze gesture activation, ii) foot-pedal activation, and iii) verbal communication with a human camera assistant.Thus, overall each participant performed the lesion removal task six times in total.To mitigate learning effects on performing the lesion removing task, the sequence in which subjects performed the task during the three control modes was randomised.Prior to the user trials, the human camera assistant was given both hands-on and theoretical training over a period of two days on the experimental model by an expert laparoscopic assistant with over 1000 cases 
performed.The assistant was recalled a week later to confirm retention and proficiency on the experimental model before the study commenced.The assistant was kept constant for all participants.The average number of gestures assessed by the two raters was 313 for the activate camera gaze gesture, and 109 for the tilt camera gaze gesture.The usability performance of the new gaze contingent laparoscope system was based upon the results obtained from the following statistical analysis studies.HMM gaze gesture recall and false positive rate assessment.Comparative analysis of the three different control modalities.Comparative analysis of results from this study against a previously suggested system.The recall and false positive rate results are shown in Table 2.The overall average recall for the HMM based gaze gestures is 96.48% with an average false positive rate of 1.17%.The discriminability index d′ was 3.706 for the activate camera gaze gesture and 4.714 for the tilt camera gaze, which showed good robustness to visual search behaviour noise.The ICC from observing gaze gestures of both trials for all 17 subjects resulted in 0.957 and 0.912 for the activate camera and tilt camera gaze gestures respectively.This result shows strong inter-rater agreement, as coefficient values of greater than 0.8 are typically considered strong agreement.This was not surprising given that identification of the three-stroke gaze gestures was straightforward and unambiguous.More importantly, these results demonstrate that gaze gestures provide high recall and a low false positive rate, making the use of HMM based gaze gestures both user-friendly and safe.The comparative analysis of the three different control modalities uses the performance metrics from the combined data of both trials and is shown in Table 3.The experiment was a within-subject design, with all seventeen subjects completing two repetitions of all three control modalities.All seventeen subjects met the baseline proficiency and training requirements to be included in the user performance based quantitative analysis.Three of the subjects wore glasses and four wore contact lenses.In order to assess whether the gaze contingent system is able to cover a workspace volume comparable to that of a human camera assistant, the camera tip trajectories from the group of surgeons were combined into one point cloud for each control modality.The point clouds were used to obtain a surface mesh via Delaunay triangulation and the overall volume occupied by the camera tip workspace was then computed using the convex hull algorithm.As can be seen from the illustration in Fig. 
11, all three camera control methods show a similar workspace volume of 2558.47 cm3 , 2556.40 cm3 and 2159.60 cm3 for the camera assistant, gaze gesture activated and pedal activated control schemes respectively.Each control scheme was also assessed for its contribution to the cognitive workload of the participant through the NASA-TLX questionnaire.A desired aspect of new technology introduced in the operating theatre is one which does not add to the cognitive burden of the surgeon.No statistically significant difference can be observed in the NASA-TLX score outcome for the gaze gesture activated control scheme relative to the camera control using a camera assistant.However, the foot-pedal activation method resulted in significantly higher NASA-TLX scores compared to the camera assistant mode.The change in the user’s balance and posture when required to use an additional limb to depress a pedal and activate the camera might be the cause of the disparity in the NASA-TLX scores.The overall group NASA-TLX scores are shown in Fig. 13.The final usability analysis involves a comparison of the system presented in this article to that presented by Fujii et al.Both systems use the same Kuka LWR arm and two gaze gestures to control the camera.Furthermore, the same panning and zooming speeds are used in both systems.The main difference between the two systems is the separation of the pan and zoom control in the previous work.In order to switch from panning the camera to zooming, the user would have to stop the camera and then perform a gaze gesture to switch to zoom control.In contrast, the system UI presented in this article combines the pan and zoom control into one activate camera control mode, where the user can switch between the panning and zooming by moving their head forward or backward.In addition, the new system enables an extra tilt camera control to rotate the camera view along the laparoscope’s longitudinal axis.The work by Fujii et al. had a subject group size of eleven participants with laparoscopic experience of 536 cases.The experience of the participating group of surgeons is comparable to the experience of the group of surgeons recruited for the user trials presented in this article, which is seventeen subjects with laparoscopic experience of 676 cases.A similar task to that in Fujii et al. was completed by the subjects.To analyse the between-subject designed user study, Brown–Forsythe F-tests were performed to check comparable variance of the comparison group data and subsequently Mann Whitney U tests were performed comparing the two systems.A summary of these results are presented in Table 4.From the Brown–Forsythe F-test, the only between-subject group pair that did not meet the equal variance criteria was the task time results obtained from our proposed gaze gesture modality and the task time from the gaze gesture modality of Fujii et al.The failure to show equal variance of these two grouped task time data implies that the task time obtained during the use of the proposed gaze gesture modality had a significantly different variance to the one obtained during the Fujii et al. 
method.Thus, this F-test shows improved consistency and speed in achieved task times by participating surgeons, when using the proposed gaze gesture control scheme.In contrast, as shown by the Mann Whitney U tests, the pedal activated control method showed no significant difference in task completion times.This result could indicate that our system, which enables quick switching between panning and zooming, is more ergonomic when the gaze gestures are used to activate the camera but not necessarily when the user is required to depress a foot-pedal.The subtle change in the user’s balance and posture when having to use another limb to depress the pedal and activate the camera might be the cause of the disparity.Surgeons who participated in this study were also asked for subjective feedback.Most of the feedback was positive, including comments such as that the panning control was working effectively and provided the advantage of maintaining a steady camera view and horizon compared to a camera assistant.Some surgeons expressed their preference for the gaze gesture activated system over the pedal system, with the opinion that it is easier to learn and to use, while the addition of the foot pedal to the gaze control increases their cognitive demand when moving the camera.On the other hand, some surgeons expressed that although the system is intuitive to use, they would prefer to use a human as that is what they are accustomed to.However these surgeons also expressed that they could see the benefit of the system especially for long operations which would be unpleasant for an assistant.Some surgeons felt that the stop camera location, which was at the bottom left corner of the screen conflicted with the usability as it causes the camera to move slightly while they fixate at the corner for 750 ms. 
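The 750 ms dwell used for the stop camera command amounts to a sliding window over the 33.3 Hz PoR stream that checks whether the gaze remains inside the corner region. The sketch below is a minimal illustration rather than the authors' C++ implementation; the region bounds, tolerance fraction and function names are assumptions.

```python
from collections import deque

DWELL_MS = 750        # dwell threshold for the stop camera command
SAMPLE_HZ = 33.3      # rate of the gaze recognition process
WINDOW = int(round(DWELL_MS / 1000 * SAMPLE_HZ))   # ~25 consecutive samples

def make_dwell_detector(region, min_fraction=0.9):
    """Return a callable fed with successive PoR samples (x, y) in screen
    coordinates; it returns True once the gaze has dwelt inside `region`
    (x0, y0, x1, y1) for at least DWELL_MS. min_fraction tolerates a few
    noisy samples falling outside the region."""
    x0, y0, x1, y1 = region
    history = deque(maxlen=WINDOW)

    def update(x, y):
        history.append(x0 <= x <= x1 and y0 <= y <= y1)
        return len(history) == WINDOW and sum(history) >= min_fraction * WINDOW

    return update

# Hypothetical stop region covering the bottom-left corner of a 1280 x 1024
# screen (y grows downwards in screen coordinates):
stop_detector = make_dwell_detector((0, 870, 190, 1024))
for x, y in [(95, 950)] * 30:       # simulated fixation in the corner
    if stop_detector(x, y):
        print("stop camera command detected")
        break
```

Allowing a small fraction of samples to fall outside the region keeps the detector tolerant of tracker noise; one possible mitigation for the slight camera drift surgeons reported while fixating the corner would be to freeze camera motion as soon as the gaze enters the stop region, before the dwell completes.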
Other feedback included some personalised preferences including a desire to have faster pan or zoom speed.The aim of this study was to assess whether the online calibration algorithm could calibrate “on the fly” with a range of different subjects and maintain a high level of accuracy and precision over time.Furthermore, the study compares the online calibration algorithm performance to when the gaze tracker is not calibrated, and when an offline 5 and 9 point calibration procedure is conducted.All gaze gestures performed were recorded for offline analysis to quantify the recall and false positive rate.As the purpose of this study was to assess the accuracy of the online calibration method through periodic checks, it was performed independently from the study presented in Section 3.1 which was meant to simulate an uninterrupted surgical scenario as closely as possible.A Tobii 1750 remote gaze tracker was used for the experiment.The online gaze tracker calibration process was implemented in C++ and operates at 33.3 Hz.The experimental data collected during the performance study consisted of subject PoR and the gaze gestures.Twenty-five subjects participated in the within-subject user study to assess the performance of the online calibration algorithm independently.All participants were trained to use the gaze gesture UI before starting the study.Each participant was required to successfully perform both gaze gestures ten times as training.Post training, each participant was asked to perform a calibration performance task under the following eye tracker calibration conditions: i) no calibration, ii) five point offline calibration, iii) nine point offline calibration, iv) after one gaze gesture, v) after two gaze gestures, vi) after five gaze gestures, vii) after ten gaze gestures.The performance task involved the participant observing one by one, nine evenly distributed white dots displayed on a screen.During this task, the participants’ gaze was recorded for offline analysis.Each trial was carried out over twenty minutes.The participant performed one gaze gesture, then the performance task, then was asked to take a five minute break by moving away from the desk.Subsequently, the same procedure was also repeated after performing two, five and ten gaze gestures in the same session.The gaze gesture count is accumulated to emulate the subject performing the calibration on the go.The study was executed in this manner to understand the online calibration’s longitudinal performance.The gaze gesture recall during the online calibration is also assessed to check if the online calibration adversely affects the gaze gesture recognition algorithm’s performance.The gaze gesture’s recall, false positive rate and the discriminability index d′, are quantified as in Section 3.1 by post-hoc observation of the recorded camera-view videos by two independent observers.The observers viewed the video sequences in the same order where their observations were compared for inter-rater reliability using the ICC.The average number of gestures assessed by the two raters was 136 for the activate camera gaze gesture and 129 for the tilt camera gaze gesture.The performance of the online calibration process is based upon the results obtained from the following studies:Comparative analysis of the online calibration against the offline calibration for accuracy and precision.HMM gaze gesture recall and false positive rate assessment.The comparative accuracy and precision performance experiment was a within-subject design, with 
all twenty-five subjects undertaking the respective calibration procedures and the performance recorded for each calibration technique.The comparative accuracy and precision performance of the online gaze tracker calibration algorithm compared to having no calibration and an offline calibration is summarised in Tables 5–7.From these tables, it is observable that the online calibration has consistent PoR estimation accuracy throughout the trial.Offline calibration methods have previously shown to deteriorate over time in accuracy performance which would in turn hinder gaze tracking techniques to be applied in the surgical theatre.During the fifteen to twenty minute duration of the trial, the gaze tracker’s calibration accuracy is maintained or even improved, which is a desirable attribute.In addition, the accuracy of the online calibration after one gaze gesture is high at 0.89° and after two gaze gestures, the online calibration is comparable to that of the five point offline calibration or nine point offline calibration respectively.The significance test’s p-values confirm this statement, as no significant difference was observed between the online calibration’s accuracy after two gaze gestures to that obtained from a five point offline calibration or a nine point offline calibration.As expected, in the absence of any calibration the accuracy is poor.This result is also consistent with the result of the statistical comparison tests against the online and offline calibration methods, where the accuracy improves significantly after the gaze tracker has been calibrated with any offline or online calibration method.Previous literature has highlighted drifting of the precision of offline calibration methods with time.In this study, we have shown from the statistical comparison tests that the online calibration algorithm is able to consistently achieve statistically indifferent precision to those of offline calibration techniques, with the added advantage that the online calibration algorithm maintains its precision through prolonged usage.The PoR estimation accuracy with respect to the location of each reference point for the group of 25 participants during different calibration methods are illustrated in Fig. 15.The distance of the lines represent the accuracy error from each reference point.From Fig. 15, it is clear that the accuracy of the PoR estimate can vary significantly within the group of subjects when there is no calibration, resulting in inaccurate PoR estimation.Fig. 
15 and respectively show the PoR estimation accuracy during a nine point offline calibration and an online calibration after ten gaze gestures.The figures illustrate the improvement that can be achieved in the PoR estimation accuracy from having either an offline or online calibration.The last performance analysis of the online gaze tracker calibration algorithm is the evaluation of the recall and false positive rates attained during the use of the algorithm.The results are summarised in Table 8.The overall average recall for the HMM based gaze gestures is 96.81%, with an average false positive rate of 0.60%.The discriminability indices d′ for the activate camera and tilt camera gaze gestures were 4.384 and 4.426 respectively, therefore showing good robustness to visual search behaviour noise.The ICC obtained from observing videos of gaze gestures being performed during the online calibration performance assessment for all 25 subjects resulted in 0.946 and 0.954 for the activate camera and tilt camera gaze gesture respectively.The strong agreement between the two observers indicated by the ICC value greater than 0.8 is not surprising given that identification of the three-stroke gaze gestures were straightforward and unambiguous.These results are comparable to those obtained when the gaze tracker is calibrated offline, thus demonstrating that the online calibration algorithm does not affect the usability of the gaze gesture activated laparoscope system.Furthermore, the very low false positive rate for the detection of gaze gestures means it is highly unlikely for unintended eye movements, and therefore erroneous gaze gestures, to be used in the calibration.In this article, we have introduced a gaze contingent robotic laparoscope which allows for pan, tilt and zoom motion capabilities in Cartesian space.Gaze gestures were used to activate the different camera control modes, with a combined panning and forward-backward zooming control mode implemented using only head motions.Validation of the system showed that HMMs are effective in recognising gaze gestures, with mean experimental recall and false positive rate of 96.5% and 1.2% respectively.Results show that user intention can be separated from unintentional eye movements, therefore providing a means to communicate with the robotic laparoscope.Such a method could also be applied to other areas such as helping disabled patients communicate.A novel online gaze tracker calibration algorithm is also introduced.Experimental results show that the algorithm is able to obtain an accuracy of better than 1° of visual angle with one gaze gesture.Comparable accuracy and precision to that of a conventional offline gaze tracker calibration can be obtained after only two gaze gestures.The results from the calibration performance show that without a calibration procedure the PoR estimation accuracy is poor and can be prone to subject specific variation.During pilot studies it was noticed that a few subjects could have their gaze gestures recognised without calibrating the gaze tracker.However, the majority of subjects could not repeatedly perform recognisable gaze gestures without calibrating the eye tracker.Moreover, it would be undesirable from a clinical perspective to have to direct the camera with an inaccurate PoR estimation.The introduction of this online gaze tracker calibration removes the need to perform an offline subject-specific calibration before being able to use the gaze tracker, improving the surgical workflow.Furthermore, online 
calibration has the added advantage of constantly updating over time, thus avoiding calibration drift problem and resulting in an accurate PoR estimation.In addition, a comprehensive usability study involving seventeen surgical residents was conducted to assess the new gaze gesture activated robotic laparoscope system.Results demonstrated that once the group of surgeons learnt how to use the system, they were able to perform a surgical navigation task quicker, with a superior camera and instrument efficiency when compared to instructing a camera assistant or when using a pedal activated control scheme.The gaze gestures provide an effective means to convey the surgeon’s desired camera control method, and the seamless switching between panning and zooming in and out by leaning forward or backward are likely to have contributed to the improved user performance.Although pedals are commonly used in the operating theatre today, having to depress a foot-pedal can change the ergonomics of the operation.Analysis of the camera workspace occupied during the user trials demonstrated that the gaze contingent laparoscope system is able to navigate a similar working volume to that of a human camera assistant.The NASA-TLX scores indicated no signs of cognitive burden during the use of the gaze gesture control mode for the group of participants when compared to using the camera assistant or pedal activated control modes.This result therefore suggests that the group of participants did not feel the gaze gesture activated system to be a complex control scheme.However, although the NASA-TLX is a well validated tool for appraisal of subjective workload in general human factors research, the subjective nature of the questionnaire should still be taken into account when interpreting results.A comparison of the proposed gaze gesture activated method to other systems was also conducted.It has been shown that the technique has faster and more consistent task completion times with lower group variance.The pedal activated control schemes did not show any difference in terms of task completion time, but resulted in significantly higher NASA-TLX scores, indicating that the surgical residents felt a higher cognitive burden whilst using it.This result is perhaps due to the introduction of using head motion to zoom in and out, which could have caused an awkward posture when combined with the need to press a pedal to control the camera.Overall, the gaze contingent laparoscopic control performed well, allowing the surgeon to rapidly execute a bimanual task without requiring a camera assistant.Furthermore, the gaze gestures were easily learnt and used by the group of participants.However, while the studies presented in this work clearly show the usefulness of gaze contingent data in robotic surgery, some limitations were also highlighted.One such limitation stems from the eye tracking hardware used.In particular, when a surgeon is wearing thick framed spectacles, there is potential for the gaze tracker to have larger PoR estimation errors and experience trouble tracking the user’s gaze.This in turn can impact the gaze gesture detection algorithm.Another hardware limitation is the external workspace of the system.While the robotic arm - laparoscope setup used was sufficient to carry out the experiments comfortably, custom-designed hardware would be able to maximise the workspace available to the surgeon.Several additional improvements to the system can be made based on the lessons learned from the comprehensive studies 
performed, as well as the surgeon feedback received.For instance, adjusting the speed regions so that the speed is proportional to the distance of the PoR from the centre of the screen would allow for smoother and more intuitive transitions.Furthermore, surgical feedback included the desire to be able to manually tune the speed of the robot, in order to achieve a pace they are comfortable with.However, due to the nature of the gaze contingent control a compromise must be made between responsiveness and smoothness of motion.Future work will study the impact of using different control schemes for the gaze-contingent laparoscope control.Active constraints can be added to implement safety-boundaries for the robot workspace and machine learning can be incorporated into the online calibration algorithm to enable a more robust calibration algorithm.More gaze gestures can be added, for example towards the top corners of the screen, to create an immersive environment for the surgeon to switch on and off a number of surgical applications intra-operatively, e.g. patient specific visualisations to help localise tumours whilst further improving the online calibration accuracy.Furthermore, reinforcement learning techniques can potentially be introduced in the gaze gesture recognition process to improve the personalised recognition performance of the gaze gesture recognition process.Spatially invariant gaze gestures are another area under research to enable head mounted gaze trackers to be used in the surgical theatre.The use of head mounted gaze trackers would offer a larger workspace for the surgeon, as current screen based gaze trackers can only offer consistent tracking accuracy within 1–1.5 m from the gaze tracker.Lastly, assessing the gaze contingent system in multi-disciplinary team environments could also be of interest.
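Much of the online calibration rests on the thin plate spline mapping from raw PoR coordinates to screen coordinates. The sketch below is a minimal, self-contained illustration of such a mapping, not the formulation given in the paper's Appendix A; the kernel U(r) = r^2 log r, the synthetic offset and all names are assumptions.

```python
import numpy as np

def _tps_kernel(r):
    # Thin plate spline radial basis U(r) = r^2 log(r), with U(0) defined as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r ** 2 * np.log(r), 0.0)

def fit_tps(src, dst):
    """Fit a 2-D thin plate spline mapping src -> dst.

    src: (n, 2) uncalibrated PoR coordinates extracted from gaze gestures
    dst: (n, 2) known screen coordinates of those calibration points
    Needs at least three non-collinear points; five are used here, matching
    the extrapolated set described in the text."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coeffs = np.linalg.solve(A, b)                 # radial weights and affine terms

    def transform(points):
        points = np.atleast_2d(np.asarray(points, float))
        U = _tps_kernel(np.linalg.norm(points[:, None] - src[None, :], axis=-1))
        Q = np.hstack([np.ones((len(points), 1)), points])
        return U @ coeffs[:n] + Q @ coeffs[n:]

    return transform

# Five calibration points (screen centre plus corners); the raw PoR is given a
# constant synthetic offset standing in for the optical-to-visual axis error.
screen = np.array([[640, 512], [64, 64], [1216, 64], [64, 960], [1216, 960]], float)
raw = screen + [25.0, -18.0]
remap = fit_tps(raw, screen)
print(np.round(remap([[665.0, 494.0]])))           # -> approximately [[640. 512.]]
```

With the five extrapolated points (screen centre plus corners), the affine part of the spline alone can absorb a constant offset between optical and visual axes, while the radial terms accommodate subtler, spatially varying distortions as further gesture-derived calibration points accumulate.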
While minimally invasive surgery offers great benefits in terms of reduced patient trauma, bleeding, as well as faster recovery time, it still presents surgeons with major ergonomic challenges. Laparoscopic surgery requires the surgeon to bimanually control surgical instruments during the operation. A dedicated assistant is thus required to manoeuvre the camera, which is often difficult to synchronise with the surgeon's movements. This article introduces a robotic system in which a rigid endoscope held by a robotic arm is controlled via the surgeon's eye movement, thus forgoing the need for a camera assistant. Gaze gestures detected via a series of eye movements are used to convey the surgeon's intention to initiate gaze contingent camera control. Hidden Markov Models (HMMs) are used for real-time gaze gesture recognition, allowing the robotic camera to pan, tilt, and zoom, whilst immune to aberrant or unintentional eye movements. A novel online calibration method for the gaze tracker is proposed, which overcomes calibration drift and simplifies its clinical application. This robotic system has been validated by comprehensive user trials and a detailed analysis performed on usability metrics to assess the performance of the system. The results demonstrate that the surgeons can perform their tasks quicker and more efficiently when compared to the use of a camera assistant or foot switches.
487
Layering in peralkaline magmas, Ilímaussaq Complex, S Greenland
The Ilímaussaq Complex, ~ 1160 ± 2 Ma, is a peralkaline to agpaitic2 layered intrusion characterised by spectacular mineral assemblages.It forms part of the Gardar igneous province of South Greenland, which is comprised of: central complexes predominantly composed of syenite; mafic to intermediate dyke swarms; ‘giant dykes’ up to 800 m in width, composed of syenogabbro with granitic or syenitic centres; and basin fill sequences of clastic sediments with sub-aerial lavas.Many Gardar central complexes are marked by the development of igneous layering, in the form of modal layering, phase layering, cryptic layering and igneous lamination.The modal layering is often present as dominantly feldspathic sequences with subordinate mafic layers and a range of processes have been invoked to account for its development.These include: density segregation of the mafic crystals from the felsic during gravitational settling; in situ growth following intermittent suppression of feldspar nucleation; settling of ‘cumulate rafts’ through the host magma; and flotation of cumulates.The Ilímaussaq Complex has some of the most spectacular layering in the Gardar Province, within its agpaitic rocks.The complex is subdivided into an outer sheath of augite syenite, which encloses the alkali granite then pulaskite, foyaite and sodalite foyaite that form part of the roof series.The bulk of the complex comprises agpaitic nepheline syenites, which from the roof down are: naujaite; lujavrite; and the lowermost exposed rocks are ‘kakortokites’, which are the focus of the present study.Many of the rock names within the present study are repeated from early work on the complex, i.e. by Giesecke in 1806 and 1809 and Ussing in 1900 and 1908, thus many are unique.The term kakortokite refers to a sequence of repetitively layered nepheline syenites, distinguished by tripartite units.Each unit has a lower layer rich in arfvedsonite, overlain by eudialyte-rich rocks then by alkali feldspar-rich rocks.These rocks include REE-, Zr-, Nb- and Ta-rich eudialyte, which is currently attracting economic interest.The layering in the Ilímaussaq kakortokites is notably different from most layering found elsewhere in the Gardar Province, which is interpreted to be associated with F-rich, low viscosity magmas.Most igneous layering in the Gardar is characterised by fine layers with cross-cutting relationships in the form of channels and slumps, which are formed by convecting magmas.The layering in the kakortokites is by contrast remarkably homogeneous across the entire outcrop and cross-cutting relationships are absent, except at the margins of the complex.Layering is macrorhythmic and 29 outcropping units have been distinguished and numbered − 11 to + 17, relative to the best-developed unit, the Unit 0 marker horizon, which outcrops towards the middle of the layered series.Autoliths petrologically and chemically consistent with the roof rocks are incorporated within Unit + 3, and one particularly large example compresses the units below.Unit + 4 is draped over the autolith.These autoliths are inferred to have separated from the roof sequence during a single roof collapse event, providing evidence for development of the kakortokites through upwards accretion via large-scale processes that operated across the entire magma chamber floor.The mechanism for the formation of the layering in the kakortokite sequence remains undetermined.Most hypotheses invoke gravitational settling and density sorting.However, Pfaff et al. 
concluded that the thickness of the overlying magma in the chamber was insufficient for the segregation of the individual layers, of at least the upper units, through gravitational settling alone.The present study combines detailed petrographic analyses, crystal size distributions and eudialyte crystal compositions to provide an insight into processes of development of igneous rocks.The CSD method has been applied over the last ~ 30 years to mafic rocks to provide a quantitative insight into processes of nucleation and crystal growth during solidification.To the authors' knowledge, this is the first time the technique has been applied to agpaitic nepheline syenites, thus allowing for greater insight into the development of the kakortokites.Unit 0 was chosen for these analyses as it is a readily identifiable marker horizon across the kakortokite series, demonstrating that samples taken laterally across several km indeed relate to the same unit.Our study provides an insight into one of the world's most spectacular and enigmatic layered intrusions and provides an understanding of the mechanisms that operate in a low viscosity, volatile-rich agpaitic magma.Field data reported in this study were collected during fieldwork in the summers of 1999 and 2012, the latter with the assistance of the exploration company TANBREEZ Mining Greenland A/S.Samples are now in the collections of the University of St Andrews.Black, red and white kakortokites were sampled from Unit 0 at four locations, and samples were also taken across the underlying Unit − 1/Unit 0 boundary, to represent the centre to the margin of the layered sequence.Full sample descriptions are shown in Table 1 and depicted in Figs.
SF1 and SF2 in the supplementary files.The textures and thicknesses of each of the black, red and white layers vary from unit to unit.However, each is remarkably homogeneous along outcrop and in drill core.Unit 0 is noted for its conspicuous red layer, although black, red and white layers are present in all units.Unit 0 is well developed and regarded as representative of the units of the kakortokite sequence in general.Unit 0 is laterally continuous and can be traced for over 5 km; it is remarkably uniform in thickness, texture and composition across the entire outcrop.At all exposures, the boundary between the Unit − 1 white kakortokite to the Unit 0 black kakortokite is sharp and planar, whereas the intra-unit boundaries are gradational over 2 to 5 cm.The thickness of the unit is relatively constant across four outcropping locations and drill cores.It is 7.8 m thick, of which the first 0.7 m is black kakortokite, overlain by 1.9 m of red kakortokite and then by 5.1 m of white kakortokite.There is no evidence for magma flow in the centre of the kakortokite series, by which we mean pseudo-sedimentary indicators including scouring and flow indicators, shearing of crystals or current bedding.Very little evidence for flow is found elsewhere, except at the contact with the marginal pegmatite and in Unit + 3, which is associated with multiple roof rock autoliths.The black kakortokite has a foliated texture and comprises 60% arfvedsonite, 20% alkali feldspar, 15% eudialyte and 5% nepheline.The red kakortokite is saccharoidal in texture and comprises 40% eudialyte, 20% alkali feldspar, 20% nepheline, 10% arfvedsonite and 10% aegirine.The white kakortokite is typically foliated with the fabric identified by preferred alignment of alkali feldspar long axes approximately parallel to the unit boundary, although the fabric is less clearly visible than that in the black layer.It typically comprises 40% alkali feldspar, 20% nepheline, 10% sodalite, 10% arfvedsonite, 10% aegirine and 10% eudialyte.The white kakortokite of Unit 0 shows greater variation vertically through the layer.At the base, it is poikilitic with arfvedsonite oikocrysts, which enclose euhedral alkali feldspar and nepheline crystals; these oikocrysts decrease upwards and only occur in the lower 0.5 m of the unit.Sodalite, fluorite, aenigmatite and rinkite also occur within the kakortokites but are not analysed in the present study.Textural analysis, through CSDs, has been applied within this study as a tool for understanding the processes through which magma solidified and equilibrated to produce the final rock texture, i.e. the geometric arrangement of crystals.CSD analysis provides insights into processes of nucleation and growth in igneous rocks with plot shapes typically being attributed to specific processes.Log-linear slopes are associated with in situ crystallisation; slopes that curve upwards at large crystal sizes with gravitational settling; whereas slopes that curve downwards over the larger crystal sizes are inferred to form by crystal fractionation, i.e. 
the growth of crystals in suspension followed by enclosure in an upwards growing crystal pile.However, a wide range of processes contribute to the development of igneous rocks and the effects of competing processes must be considered when interpreting CSD plots, as the final texture reflects a combination of kinetic, mechanic and equilibrium effects.Initial nucleation in a magma is driven by kinetics, associated with either undercooling or supersaturation of the system.Two population models are cited within the development of igneous rocks: steady-state crystallisation and batch crystallisation.Steady-state crystallisation is rarely representative of geological systems, but is modelled to result in a linear relationship between population density and crystal size, although changes in crystallisation parameters, e.g. residence time or nucleation density, can result in complexity within CSD plots at small crystal sizes.Batch crystallisation applies CSD theory to growth within a closed system, thus is a better approximation of igneous systems.Modelling of nucleation within this system results in log-linear CSDs that systematically migrate to larger crystal sizes with increasing crystallinity.However, increasing crystallinity over 50% can result in curvature and slope complexity at the smallest crystal sizes.After nucleation and primary crystal growth, mechanical processes modify crystal populations through compaction, sorting and mixing.Mechanical compaction will affect the entire crystal population equally and has the effect of increasing the population density without modifying the CSD plot shape.Importantly, pure mechanical compaction is not inferred to contribute to the development of foliation fabrics within the rocks.Pressure-solution compaction can have a greater effect on the CSD plot shape as smaller crystals are preferentially dissolved, due to their greater surface energy to volume ratios, thus this process results in downwards-curvature of the CSD slope at small crystal sizes.Whilst this process has a similar effect on CSD patterns as coarsening, discussed below, the effects can be determined in the final rock textures through development of foliation.Sorting of crystals results from processes of crystal accumulation, which in this context is used to describe the process of separating crystals from magma via gravitational segregation, filter pressing, flow differentiation and/or crystallisation on conduit walls.Accumulation is often considered to produce a CSD that curves upwards at larger crystal sizes.However, the effects of this process via gravitational segregation on CSDs were modelled by Higgins who found that the initial stages of accumulation via gravity result in a log-linear CSD plot shape.With increasing crystal accumulation over time, the CSD plots retain their linear form, but rotate around the intercept.Minor curvature occurs as all crystals are precipitated, however this is reflected as a downturn at the smallest crystal sizes, importantly not as an upturn at the larger crystal sizes.Mixing of crystal populations occurs as a result of magma mixing or mingling.The effects of this process can be observed via strong kinks in CSD plot shapes, however mixing of two crystal populations with contrasting slopes and intercepts can produce a CSD with a steep slope at small crystal sizes, which lessens towards larger crystal sizes, i.e. 
has an upwards curvature due to the representation of two log-linear CSDs on a single plot.Further complications to plot shapes occur following growth of the two crystal populations, as the growth of the existing crystals will displace the CSD towards larger crystal sizes, but should preserve log-linear segments of the mixed CSD plot shape.Equilibration is one of the most important processes to consider as this determines the final amount and size of crystals.A variety of terms are used to describe processes of equilibration, including Ostwald ripening, textural maturation, crystal ageing and annealing.However, all these terms can be considered under the general bracket of textural coarsening.As the kinetic driving forces decrease, reducing the nucleation rate to zero, the system equilibrates through textural coarsening to minimise the total energy of the crystal population.This occurs through crystal growth as larger crystals are more stable, due to a lower surface area to volume ratio, than smaller crystals.Initial stages of growth occur at grain boundaries and are observed through modification of dihedral angles from original impingement textures to either solid-state equilibrium textures, with high median dihedral angles, or melt-mediated equilibrium textures with low median dihedral angles, when a single phase system is considered.Whilst several models for textural coarsening have been proposed, the most applicable to geological systems is the communicating neighbours model, which implies that growth rates are not only dependent on crystal size, but are also affected by individual crystal characteristics and the position of the crystal with respect to its neighbouring crystals.Combining this model of crystal growth with temperature cycling in the magma chamber indicates that temperature cycles of only a few degrees can result in macroscopic coarsening and produce CSDs similar to those observed in natural systems.The effect of this process on CSD plots is observed through a downturn at the smallest crystal sizes, with fanning of the CSD slopes at larger crystal sizes.Multiple processes develop the various CSD plot shapes and the competing and/or overprinting effects of kinetic, mechanic and equilibration processes must be considered when interpreting CSD data in the context of determining primary processes of rock development.CSD analysis provides a measure of the number of crystals of a mineral per unit volume, within a series of defined size intervals.Analysis of igneous rocks and, in particular, inferences about the effects of crystal settling were pioneered with studies of low viscosity basalts.Given the low viscosity of agpaitic magmas, similar inferences can be made from the CSDs of these magma compositions.This study calculates CSDs from thin sections via hand-digitised images following Higgins, using CSDCorrections v. 
1.4.0.2.The data are plotted as population density against crystal size intervals and plot profiles are interpreted following the theory described in Section 3.1 with reference to Higgins and Marsh.The characteristic minerals from each layer are investigated.The other cumulus phases in the kakortokites form a small proportion of each unit, thus analysis would not provide statistically reliable insights into unit development.Only the characteristic minerals for each layer were used as they form the bulk of the rock and the high percentages of each mineral allow for a rigorous quantitative textural analysis.Thus, the data are chosen to optimise insights into the processes that contributed to the formation of the layers.To negate the effects of late-stage alteration, which particularly affects eudialyte and nepheline, relict eudialyte was only included where the original crystal outlines were preserved and could be confidently identified as pseudomorphs.Nepheline was excluded where altered, as the original crystal outlines were not preserved, thus nepheline data are not presented for all white kakortokites.Any minerals interpreted as intercumulus, i.e. anhedral crystal shapes and/or interstitial textures, were also excluded from the CSD analysis, as these do not provide insights into the primary processes of layer formation and development.Aspect ratios used for each mineral are based on crystal habits with reference to the CSDSlice database: 1:1.4:2.8 for arfvedsonite, 1:1.15:1.5 for eudialyte, 1:3.2:5.5 for alkali feldspar and 1:1.15:1.6 for nepheline.Any crystal alignment was determined through CSDCorrections and input to calculate CSD profiles.The smallest grain-size reported is inferred to be the lower limit of a sample as the smallest crystals of each studied mineral are easily visible.Accordingly, all grains in the sample were measured with the use of circularly polarised light.Use of CPL ensures that all minerals display their greatest birefringence and thus enhances the visibility of low birefringence minerals such as nepheline and feldspars.
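For readers unfamiliar with the mechanics of the method, the following sketch shows in outline how a CSD can be assembled once stereologically corrected three-dimensional crystal lengths are available. It is an illustration only, written in Python: the function names, bin spacing, sample volume and input file name are hypothetical placeholders rather than values from this study, and CSDCorrections applies intersection-probability, shape and fabric corrections that are not reproduced here.

import numpy as np

def population_density(lengths_mm, bin_edges_mm, sample_volume_mm3):
    # Bin 3-D crystal lengths and return bin midpoints with ln(population density).
    # Population density n(L) has units of mm^-4: crystals per unit volume per
    # unit length interval, the quantity plotted on the y-axis of a CSD diagram.
    counts, edges = np.histogram(lengths_mm, bins=bin_edges_mm)
    widths = np.diff(edges)
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    n = counts / (sample_volume_mm3 * widths)
    valid = counts > 0  # skip empty bins to avoid log(0)
    return midpoints[valid], np.log(n[valid])

def central_slope(midpoints, ln_n, size_range):
    # Least-squares fit to the central, visually log-linear region of the CSD.
    # The slope equals -1/C, where C is the characteristic length, and the
    # intercept approximates ln(n0), the nucleation density.
    lo, hi = size_range
    mask = (midpoints >= lo) & (midpoints <= hi)
    slope, intercept = np.polyfit(midpoints[mask], ln_n[mask], 1)
    return slope, intercept

# Hypothetical usage with logarithmically spaced bins:
# lengths = np.loadtxt("arfvedsonite_lengths_mm.txt")  # placeholder file name
# mid, ln_n = population_density(lengths, np.geomspace(0.1, 5.0, 12), 2.0e3)
# slope, intercept = central_slope(mid, ln_n, size_range=(0.3, 2.0))

Plotting ln_n against mid gives a population density versus crystal size diagram of the kind reported below, and the fitted slope is directly comparable with the mm− 1 slope values quoted for each layer.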
CSD data are presented in Fig. 4; the supplementary files contain the input and output data and the raw data.Each dataset, excluding the alkali feldspar plots, has a central region with a negative slope that visually approximates to a log-linear relationship, although some plots show minor curvature.The behaviour of smaller crystal populations is variable; some project along the central trend, others swerve up or down.The populations of larger crystals sometimes extend the central region but in other cases flatten.Whilst the central regions of the graphs are consistent, the behaviour of smaller and larger crystal populations is not a function of the mineral type, and even varies between different localities for the same mineral in the same unit.These trends do not comply with one simple process of crystal formation, but indicate multiple competing crystal growth mechanisms.The Unit − 1 white kakortokite CSDs have log-linear central regions for both the alkali feldspar and nepheline, with downturns at the smallest crystal sizes.The alkali feldspar crystal sizes determined from CSDCorrections range from 1.5 to 9.7 mm, and the CSD slope value calculated from linear regression of the central region of the CSD is − 0.62 mm− 1.The nepheline crystal sizes range from 0.6 to 2.3 mm and the slope value is − 1.23 mm− 1.The CSDs for Unit − 1 white kakortokite sampled immediately below the Unit − 1/Unit 0 boundary have a range of plot shapes.The alkali feldspar CSDs for samples from Locations A to B are kinked with an R-squared coefficient between 0.97 and 0.98, whereas Locations C and D are relatively log-linear with R-squared coefficients from 0.99 to 1.00.Additionally, Locations A to C have upward kinks at the smallest crystal sizes, representing an increased population density, whereas Location D indicates a downturn instead.Nepheline was only analysed at Location C and the CSD has a flattening at the smallest crystal sizes but is otherwise log-linear with an R-squared coefficient of 1.00.The alkali feldspar crystal sizes range from 0.01 to 11.3 mm and the CSD slope values for the relatively log-linear central regions range from − 0.68 to − 0.81 mm− 1 with R-squared coefficients of 0.97–1.00.The nepheline crystal sizes range from 0.4 to 2.3 mm and the CSD slope value, excluding the flattened region, is − 3.00 mm− 1.The arfvedsonite CSDs for the samples immediately above the Unit − 1/Unit 0 boundary have similar slope profiles to the arfvedsonite CSDs from the central regions of the Unit 0 black kakortokite.The arfvedsonite crystal sizes range from 0.1 to 5.0 mm immediately above the boundary and between 0.2 and 5.0 mm in the centre of black layers.All CSDs, except U-1/0 Location C and U0 Location B, have kinks at larger crystal sizes: Locations A, A, B and D show kinked plots at 3.18 mm, 3.17 mm, 1.27 mm and 2.02 mm, respectively.The CSD plots for the boundary samples have slopes that curve upwards at the largest crystal sizes.The central portion of the Unit 0 black kakortokite shows a slope that curves upwards at large crystal sizes at Location A and a log-linear slope at Location B.The slope values for all CSDs range from − 3.73 to − 5.15 mm− 1.The CSDs for the eudialyte in the red kakortokite samples are log-linear with R-squared coefficients between 0.98 and 1.00.The crystal sizes range from 0.4 to 2.3 mm and the slope values for the central regions range from − 4.26 to − 5.20 mm− 1.The Unit 0 white kakortokite alkali feldspar CSDs have a range of slope patterns.Locations A to B have slopes that are concave-downwards
over the range of crystal sizes with downturns at the smallest crystal sizes.Location C has a slope that is log-linear with an R-squared coefficient of 0.99.Location D was sampled from the first 0.5 m of the Unit 0 white kakortokite and displays the oikocrystic texture.Large arfvedsonite oikocrysts host euhedral crystals of nepheline and alkali feldspar.The CSD plot has a convex-upwards slope over the entire range of crystal sizes and an increased population density at the smallest crystal sizes.Despite the range of patterns, the crystal sizes and CSD slope values are similar between locations.The crystal sizes range from 0.2 to 9.5 mm and the central regions of plots have slope values ranging between − 0.61 and − 0.79 mm− 1.The nepheline data from the Unit 0 white kakortokite have log-linear slopes.The crystal sizes vary between 0.2 and 3.5 mm and the slope values for the central regions range from − 2.04 to 3.87 mm− 1.CSD data were verified through comparing the phase percent calculated from the total area of the crystals and the CSD itself applying equations for both linear and curved CSDs where appropriate.The data are consistent, thus the CSDs are inferred to have been correctly determined from 2 dimensional measurements.A plot of slope vs. Lmax shows a positive correlation between the arfvedsonite, eudialyte and nepheline data, whereas the alkali feldspar data plot in a separate grouping.
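As a guide to how such a consistency check operates in the simplest case, the closure relation for a perfectly log-linear CSD is sketched below. The symbols follow general CSD usage rather than values reported in this study, and curved CSDs require numerical integration of the measured plot instead:

\phi \;=\; \int_{0}^{\infty} k_v \, L^{3}\, n(L)\,\mathrm{d}L
\quad\text{with}\quad n(L) = n_{0}\,e^{-L/C}
\quad\Rightarrow\quad \phi \;=\; 6\,k_v\,n_{0}\,C^{4},

where n_0 is the nucleation density given by the CSD intercept, C = −1/slope is the characteristic length and k_v is the volumetric shape factor of the mineral (π/6 for spheres). The resulting volumetric proportion φ can then be compared directly with the phase percent measured from the total crystal area.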
A plot of volumetric phase abundance vs. characteristic length displays a grouping of the arfvedsonite and eudialyte data, whereas the nepheline and alkali feldspar data are more variable.Backscattered electron imaging and electron probe microanalysis of targeted eudialyte crystals were performed at the University of St Andrews using the JEOL JXA-8600 superprobe in wavelength dispersion mode.Compositions were obtained using the operating conditions of Pfaff et al.: an acceleration voltage of 20 kV, a beam current of 20 nA and a defocused beam of 10 μm diameter to measure all elements.Count-times on peaks for major elements were 16 s and 30–120 s for minor elements.Background measurement times were half of the peak times.Na was measured first with a count time of 10 s on peak and 5 s on background to avoid atom migration from the analysed spot.A combination of natural and synthetic standards was used and corrections were internally performed using PAP.The term eudialyte is used throughout for the eudialyte-group minerals.Their formulae were calculated through normalisation of the sum of the Si + Zr + Ti + Nb + Al + Hf cations to 29 a.p.f.u.End-member eudialyte compositions were determined following the methodology of Pfaff et al.The eudialyte chemical data are presented in Figs. 6–7, the investigated elemental ratios in Table 2 and the raw data files are located in the supplementary materials, Table SF3.Compositions of the EGM range between eudialyte, kentbrooksite and ferrokentbrooksite.Eudialyte crystals from each rock type plot in a distinct grouping, from relatively Mn-poor compositions in black kakortokite to relatively Mn-rich compositions in white kakortokite.Analysing this using the Fe/Mn ratio further illustrates this variation in composition.The eudialyte in the black kakortokite has the greatest Fe/Mn ratios, between 10.85 and 13.34.The eudialyte in red kakortokite ranges from 7.23 to 10.55 and eudialyte in white kakortokite has the lowest Fe/Mn ratios between 5.77 and 9.98.Eudialyte from white kakortokite immediately below the boundary has similar Fe/Mn ratios to those from the Unit − 1 white kakortokite.There is a discontinuity across the Unit − 1/Unit 0 boundary as the eudialyte crystals from the Unit 0 black kakortokite immediately above the boundary have a larger range of, and reduced, ratios compared to those in the central regions of the Unit 0 black kakortokite layer.The range of Ca/ ratios exhibits little variation within error through the Unit − 1/Unit 0 boundary and Unit 0.There is a general trend of decreasing values upwards through Unit 0 and a small change in values occurs across the Unit − 1/Unit 0 boundary.The outlying low values are associated with eudialyte crystals that were shown by petrographic analysis to be partially pseudomorphed.There is a general decrease in Cl content of the eudialyte crystals upwards through Unit 0.There is a discontinuity in Cl content at the Unit − 1/Unit 0 boundary as the eudialyte crystals in the U-1 white kakortokite have lower Cl contents than those in the overlying black kakortokite of U0.Eudialyte crystals in the Unit 0 white kakortokite have the greatest variation between locations.
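The ratios discussed above follow from the normalised formulae, and the short Python sketch below shows one way a 29-cation normalisation and the Fe/Mn ratio could be computed from oxide wt% data. The oxide list, the example analysis and the function name are illustrative placeholders only, not analyses or code from this study.

# Illustrative oxide-to-cation conversion for a eudialyte-group analysis.
# Oxide molar masses (g/mol), cations contributed per formula unit of oxide,
# and the corresponding cation symbol.
OXIDES = {
    "SiO2":  (60.08, 1, "Si"),
    "ZrO2":  (123.22, 1, "Zr"),
    "TiO2":  (79.87, 1, "Ti"),
    "Nb2O5": (265.81, 2, "Nb"),
    "Al2O3": (101.96, 2, "Al"),
    "HfO2":  (210.49, 1, "Hf"),
    "FeO":   (71.84, 1, "Fe"),
    "MnO":   (70.94, 1, "Mn"),
}

NORMALISING_CATIONS = {"Si", "Zr", "Ti", "Nb", "Al", "Hf"}  # summed to 29 a.p.f.u.

def cations_apfu(oxide_wt_percent):
    # Convert oxide wt% to cation proportions, then scale the whole analysis so
    # that Si + Zr + Ti + Nb + Al + Hf = 29 atoms per formula unit.
    moles = {}
    for oxide, wt in oxide_wt_percent.items():
        molar_mass, cations_per_oxide, cation = OXIDES[oxide]
        moles[cation] = moles.get(cation, 0.0) + wt / molar_mass * cations_per_oxide
    scale = 29.0 / sum(v for c, v in moles.items() if c in NORMALISING_CATIONS)
    return {cation: v * scale for cation, v in moles.items()}

# Hypothetical analysis (wt%), for illustration only:
analysis = {"SiO2": 47.5, "ZrO2": 11.8, "TiO2": 0.4, "Nb2O5": 1.1,
            "Al2O3": 0.2, "HfO2": 0.3, "FeO": 5.2, "MnO": 0.6}
apfu = cations_apfu(analysis)
fe_mn = apfu["Fe"] / apfu["Mn"]  # higher values indicate more primitive compositions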
The CSD slopes of the present study show a range of plot profiles, which fall into several common themes.All of the profiles are broadly log-linear but with modification at the small and large crystal regions reflecting a range of modifying influences.The details of each profile are given below.CSD plots from the black kakortokites at the base of Unit 0 have different plot shapes compared to the plots from the central portion of the black layer.The CSD plots for the samples near the boundary display marked kinks, whereas the plots for the central portion of the layer are curved or log-linear.Kinked plots represent mixed crystal populations, but this can also result from a variety of processes.The CSD plot for Location A in the central portion of the layer provides more insight into those processes.This sample contains some large arfvedsonite crystals, which are euhedral and markedly larger than the groundmass, similar to the samples from the unit boundary.When these large crystals are excluded from the analysis, the resultant CSD is log-linear and indicative of formation of the groundmass arfvedsonite through kinetically controlled nucleation and growth, which is inferred to occur in situ at the interface between the crystal mush and the magma.The concave upwards curvature of the CSD plot for the entire sample is interpreted as the effects of mixing two crystal populations, i.e. the groundmass, which developed in situ, and the larger crystals.Due to the presence of larger crystals, accumulation processes, i.e. gravitational settling into the basal layer from the magma above, cannot be excluded from the development of the black kakortokite layer.
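The mixing interpretation can be illustrated numerically: summing two log-linear populations with contrasting slopes produces exactly the concave-upwards form described here. The parameter values below are arbitrary illustrative choices and are not fitted to the Unit 0 data.

import numpy as np

def log_linear_csd(L, n0, characteristic_length):
    # Population density of a log-linear CSD: n(L) = n0 * exp(-L / C).
    return n0 * np.exp(-L / characteristic_length)

L = np.linspace(0.2, 5.0, 25)  # crystal size (mm)
groundmass = log_linear_csd(L, n0=2.0e3, characteristic_length=0.25)  # steep slope
large_xls = log_linear_csd(L, n0=5.0, characteristic_length=1.5)      # shallow slope

ln_mixed = np.log(groundmass + large_xls)
# ln_mixed follows the steep groundmass trend at small crystal sizes and
# flattens towards the shallow trend at large sizes, so the combined plot is
# concave-upwards even though each component population is perfectly log-linear.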
The CSDs for red kakortokite samples are log-linear, indicating the bulk of the eudialyte formed through kinetically controlled nucleation and growth, inferred to be associated with in situ crystallisation.The minor curvature in some plots, however, indicates that in situ crystallisation was accompanied by processes of density segregation with some crystals remaining in suspension, i.e. Marsh's fractional crystallisation, but this was a minor contributor to layer development.The white kakortokite samples have a range of CSD patterns that vary between localities and with stratigraphic height.This range of patterns is inferred to be associated with processes of accumulation following kinetically controlled nucleation and growth of crystals throughout the magma body.The lower portion of white kakortokite is characterised by CSD plots that have an upwards curvature over the larger crystal sizes.This is inferred to be associated with gravitational segregation and settling of alkali feldspar during development of the lower portion of the white layer.The CSD plots for samples from the upper portion of the white layer show curvature over the entire range of crystal sizes, which is also consistent with accumulation processes, but with alkali feldspar crystals remaining in suspension in the magma.Thus, processes of accumulation and gravitational settling were contributors to forming the white kakortokite layer.It should however be noted that the large alkali feldspar crystal sizes resulted in a reduced number of crystals within the section, thus fewer crystals were analysed.Therefore these interpretations are based on populations of < 200 crystals, below the minimum indicated for statistically reliable analysis, thus the slope patterns may not be a reliable indicator of the processes involved in the development of white kakortokite.The nepheline plots, however, are more consistent with the bulk of the nepheline crystals forming through kinetically controlled in situ crystallisation at the crystal mush–magma interface.Further analysis of the CSD data through slope and Lmax indicates nucleation through a batch crystallisation model with a constant nucleation rate and, except the alkali feldspar, an effective growth rate that increases exponentially with crystal size.The alkali feldspar data indicate a constant growth rate.The kakortokites are often
associated with a fabric defined by preferred orientations of the acicular or tabular minerals.There are three common methods through which a preferred orientation can develop within magmatic cumulates.A foliation fabric, without lineations, can be developed through settling of crystals in quiescent conditions, as during settling particles will become orientated with their principal cross-sectional area perpendicular to the direction of motion.Deposition in strong flow regimes can also produce a preferred orientation of crystals, due to shear stress, and lineations typically also develop.Compaction through processes of pressure-solution can produce a foliation fabric, typically with lineations, and can be distinguished by recrystallisation and deformation of crystals.As the fabric in the kakortokites is marked by the preferred orientation of crystals without the development of lineations or deformation of crystals, it is unlikely to have formed purely through pressure-solution associated with compaction.However, the alignment of alkali feldspar crystals is stronger in the black kakortokites than in the white, thus compaction is inferred to have contributed to the development of the fabric.The lack of flow indicators throughout the units also indicates that development of the fabric through shear stress associated with magmatic flow is unlikely.Further, alkali feldspar crystals with long aspect ratios lie across the Unit − 1/Unit 0 boundary and these are unlikely to have been preserved during magmatic flow.The fabrics throughout the black, red and white layers within Unit 0 are always associated with preferred alignment of alkali feldspar only.Other units may also have a fabric in the red and white kakortokites that is further defined by acicular arfvedsonite crystals.These subordinate phases were not investigated through CSD analysis due to the limited number of crystals within a section, thus accumulation of these crystals through gravitational settling cannot be discounted and indeed the CSDs indicate that alkali feldspar in the white kakortokites did accumulate through gravitational segregation.Therefore, the fabric is inferred to have primarily developed through sedimentation within a quiescent magma, but enhanced by late stage compaction as the cumulate pile grew during development of the layered kakortokite series and overlying rocks.The final process modifying the textures of the kakortokites is equilibration.The samples in this study exhibit textural features of coarsening including modification of apparent dihedral angles and increased curvature of crystal faces.Most of the CSD plots have a downturn at the smallest crystal sizes, and, although this feature can be accounted for by a range of processes, the petrographic indicators of coarsening reflect final modification of textures and CSD patterns through equilibration.This is inferred to have occurred through the CN model rather than purely through a process such as Ostwald ripening, which only accounts for micron-scale processes of crystal growth and dissolution.This later-stage process of grain growth is inferred to have occurred by grain-overgrowth, while the kakortokite layers were still in a ‘mushy’ state.This would have modified the CSD profiles, complicating the interpretation of the slope patterns.From this and the CSD slopes, processes of gravitational settling cannot be discounted from contributing to the development of Unit 0.However, we infer that processes of in situ crystallisation played the dominant role in the 
development of at least the black and red kakortokites of Unit 0.The subdivision of the kakortokite units into black, red and white layers is consistent with ‘normal’ density layering, i.e. the densest phase, arfvedsonite, is concentrated at the base of the unit, whereas the less dense phases, alkali feldspar and nepheline, are concentrated in the upper portions of the unit.This density grading led many authors to attribute the tripartite nature of the layering to density segregation of coevally nucleating and growing phases during gravitational settling.This theory is extended, at least during the development of the lowest exposed kakortokite units, to infer that sodalite was also a cumulus crystallising phase, which due to its low density, floated in the magma and formed the naujaite as a flotation cumulate at the roof of the chamber.This provides an elegant and simple explanation for the tripartite nature of the layering, but does not correlate with the textural data above, nor account for the repetition of the layering, as 29 units occur within the kakortokite outcrops.Thus, simple density sorting is untenable.It has been hypothesised that the layering developed through the formation of ‘crystal mats’.This hypothesis explains the regularity of the repetitive macrorhythmic layering, but does not account for the modes of development inferred in the present study.The ‘crystal mats’ hypothesis has three main requirements: the magma must not be convecting; the density of the magma must be such that arfvedsonite will settle, whereas alkali feldspar is buoyant; and the crystals can sink or float a significant distance, although not necessarily the entire height of the magma chamber, before they become trapped.In such models, the processes of gravitational settling and fractionation are key.Nucleation of each of the main minerals is contemporaneous and the layering is formed due to interference between the coupled processes of settling and rising crystals.This caused the densest phase to become trapped at certain horizons and the acicular shape of the arfvedsonite crystals contributed to the development of cohesive mats.This, however, does not correlate with the CSD data, which indicate that the bulk of the arfvedsonite crystals forming the black kakortokite developed in situ, rather than accumulating through gravitational settling.However, although Lindhuber et al. illustrated the formation of mats in terms of crystal settling and rising, the development of arfvedsonite mats through in situ crystallisation cannot be discounted.These ‘crystal mats’ would result in the development of magma layers, which are semi-isolated from the resident magma and act as barriers to settling/flotation.Each unit is inferred by Lindhuber et al. to have formed quasi-independently as a crystallisation cell composed of an arfvedsonite-rich mat with an overlying magma-rich crystal mush.Bons et al. on the other hand, infer that buoyant alkali feldspar crystals became trapped underneath a ‘mat’, contributing to the formation of white kakortokite layers, with or without a horizon of melt lying between the buoyant layer and the underlying mat.Importantly for the understanding of kakortokite development, the two models differ on the permeability of the ‘crystal mats’.Bons et al. infer that the mats would be porous enough to allow for the migration of melt during late-stage equilibration and/or compaction.Lindhuber et al. 
and Marks and Markl instead apply textural observations to infer that alignment of arfvedsonite crystals would make the ‘mats’ impermeable to both crystals and melt.The petrography and textures associated with the Unit − 1/Unit 0 boundary investigated in the present study do not corroborate this, as they instead indicate infiltration of melt to some extent below the unit boundary, which is inconsistent with impermeable mats.Whilst the ‘crystal mat’ hypothesis explains the regularity of the layering, other lines of evidence are inconsistent with it.For example, the autolith within Unit + 3 detached from the roof series and compressed the underlying units, whereas the overlying units drape over the autolith.These field observations indicate that the kakortokite layers formed sequentially from the bottom up and that the topmost layer was open to the magma chamber.In models where layering is formed by interference between linked processes with different rates operating in different directions, whether that is chemical diffusion or the physical competition between sinking and rising crystals, all form at the same time.In addition, there would have been long periods prior to complete solidification of the arfvedsonite mats, during which the density contrast between the mat and the underlying magma or feldspathic crystal mush would favour slumping of the arfvedsonite, in an analogue to soft sediment deformation.Our field studies show no evidence, even locally, for areas where the denser arfvedsonite has collapsed downwards.Furthermore, the mineral textures and chemistries reported by Lindhuber et al. and in the present study are inconsistent with the ‘crystal mats’ model alone.If the ‘mats’ resulted in the trapping of buoyant alkali feldspar crystals, then the top of each unit should resemble the naujaite, which is inferred to have formed as a flotation cumulate, with euhedral alkali feldspar crystals surrounded by oikocrysts of arfvedsonite and eudialyte.Alternatively, trapping of early formed euhedral eudialyte crystals that remained in suspension, which the CSD data indicate is possible, should preserve relatively primitive compositions in these crystals, as they would be trapped during the early stages of unit crystallisation.However, no such textures are seen at the top of any of the units; the only oikocrystic textures are observed at the base of the white layers.The units typically have sharp basal contacts, which is accommodated by the ‘crystal mat’ models, but the gradational contacts between layers within units are hard to reconcile with simple density sorting during gravitational settling.Additionally the eudialyte compositional data presented above and in previous studies do not indicate relatively primitive eudialyte crystals to be present at the top of any of the units.Instead the eudialyte crystals in Unit 0 are marked by a continuous evolutionary profile.
The inconsistencies between CSD, textural and compositional data and the ‘crystal mats’ model indicate that a different model must be considered for the origin of the layering in the kakortokites.Larsen and Sørensen proposed that the repetition of the layering resulted from a compositionally stratified magma chamber.Heat loss through the roof rocks developed a temperature gradient through the magma chamber, which resulted in the formation of concentration gradients and compositional stratification.Each unit is inferred to have developed due to loss of volatiles upwards, with the nucleation order being controlled by volatile element concentrations and degree of undercooling.High concentrations of volatile elements are suggested to suppress nucleation within the magma body, except in a basal layer, which lost heat and volatile elements to the overlying layer.This allowed for nucleation and growth of minerals, initially arfvedsonite and eudialyte then alkali feldspar and nepheline with density segregation during gravitational settling enhancing the layering.A similar model was invoked by Bailey et al.
for development of microrhythmic layering in the arfvedsonite lujavrite.They advocate crystallisation in a stagnant basal layer with alternate crystallisation of a dark layer and a light urtite layer controlled by variations in volatile concentrations and activities.A sequential change in volatile content and in particular the concentration of halogens is the preferred cause of the repetition of the layering.We adopt a modified version of the model of Larsen and Sørensen and propose nucleation in an aphyric basal layer of magma which is supersaturated in all of the mineral phases, but in which nucleation of crystals is inhibited."Following Marsh's magmatic principles, we infer that the layered kakortokite series did not form through a single chamber filling event and suggest that the basal magma layer formed from a replenishment event.Textural evidence for a replenishment event is observed through resorption of the largest alkali feldspar crystals in the Unit − 1 white kakortokite boundary samples, which demonstrates a change in the thermal and/or chemical regime between the crystallisation of Unit − 1 and the development of Unit 0.Additionally, the upturned kink in all alkali feldspar CSDs, except Location D, is indicative of a late-stage nucleation event that formed ‘pockets’ of small crystals.At Locations B and C, there is a small downturned kink between 1.1 to 1.8 mm and 0.9 to 1.4 mm, respectively, providing further evidence for a late-stage nucleation event that overprints previously coarsened crystals.We attribute this to a replenishment event that resulted in melt infiltration into the underlying crystal mush to a depth of less than 30 cm.The eudialyte compositions are consistent with this model as Fe/Mn ratios have a sharp discontinuity across the Unit − 1/Unit 0 boundary.The Fe/Mn ratios increase from the U-1 white kakortokite across the Unit − 1/Unit 0 boundary to a maximum in the black kakortokite samples from the central regions of the layer.Low eudialyte Fe/Mn ratios reflect formation from relatively evolved magmas, whereas high Fe/Mn ratios reflect formation from relatively primitive magmas.This indicates that the eudialyte crystals in the Unit 0 black kakortokite crystallised from more primitive magma than those in the underlying Unit − 1 white kakortokite, pointing to a change in magma composition at the Unit − 1/Unit 0 boundary.Within Unit 0 the Fe/Mn ratios show a continuous decrease upwards throughout the unit, reflecting crystallisation from a progressively evolving magma.The additional chemical data from the eudialyte crystals are supportive of an injection of a relatively primitive magma that was halogen-rich.The Ca/ and Cl contents of eudialyte crystals decrease during melt evolution and all data display a discontinuity at the Unit − 1/Unit 0 boundary with the highest ratios in the black kakortokite.The values then decrease upwards continuously through Unit 0.Injection of primitive magma will allow for development of a basal magma layer, due to compositional differences between it and the resident magma.In this model, the resident magma is inferred to have an agpaitic composition and be enriched in incompatible elements but not enriched in volatile elements.The eudialyte compositional data indicate that the injecting magma is richer in iron than the resident magma thus was more primitive and had a greater density.This contributed to pooling of the injected magma between the resident magma and the cumulate pile and to allow downwards percolation into the underlying 
cumulates.The lack of indicators of magma flow throughout most of the kakortokite series and the planar nature of the unit boundaries reflect an exceptionally quiescent resident magma throughout the development of the layered series, except during the roof collapse event.This reduces the potential for mixing of the injected magma with the resident magma, allowing for the formation of the basal layer.No evidence is found in the present study for xenocrysts within the kakortokites, indicating that the replenishing magma was aphyric.Thus, the injecting magma is inferred to have been saturated in each of the key components as well as being enriched in volatile elements.Whilst it is near impossible to determine accurately the thickness of the basal layer that developed each unit, we estimate the scale to be similar to the units.Since the units have a mean thickness of 7 m, we infer that they developed from a basal layer of magma ~ 10 m thick.It should however be noted that we observe variations in unit thicknesses from 2.5 m to 17 m and this may correspond to variations in the volume of injected magma.The resident magma is inferred to have always separated the developing kakortokite sequence from the naujaite, but the magma chamber is inferred to have undergone inflation during development of the rock sequences, thus the resident magma may have had a vertical thickness of a few hundred metres.The sharp boundary between Unit − 1 and Unit 0 formed from combined thermal and chemical erosion of a semi-rigid crystal mush as shown by the embayed alkali feldspar crystals at the unit boundary.Previous authors have noted structures within the layered kakortokites described as slumps.A full description of these rocks and their genesis is outside the scope of the present study, but they do not petrologically or chemically correspond to the kakortokites.Thus we infer that Unit − 1 was semi-rigid at the time of development of Unit 0, although the roof autolith in Unit + 3 indicates that ~ 20 m of crystal mush was unconsolidated enough to undergo compaction.The initial high concentration of halogens, as indicated by the Cl-enriched eudialyte crystals in black kakortokite, will inhibit nucleation of all mineral phases, resulting in supersaturation of each in the magma.As the basal layer of magma cools, due to thermal equilibration, arfvedsonite is the first phase to nucleate as it can crystallise at higher concentrations of volatile elements than the other phases.This main nucleation event of arfvedsonite primarily occurred in situ at the crystal mush–magma interface, potentially enhanced by epitaxial effects, whereas a smaller number of arfvedsonite crystals grew in suspension in the basal layer.Rapid growth, followed by settling of these crystals provided a crystal population that is notably larger than the crystals that grew in situ.These combined processes developed the black kakortokite layer of Unit 0 above the boundary with subordinate crystallisation of the minor phases.Crystallisation of the black kakortokite results in a decrease in the concentration of fluorine in the basal magma layer, as crystallisation of arfvedsonite takes F from the magma during crystallisation.Minor processes of upwards loss of halogens along concentration gradients, which develop as a result of the quiescent state of the resident magma and the coeval crystallisation of the sodalite-rich roof rocks, would also reduce the halogen concentration.This would facilitate the nucleation of eudialyte and a continuous change in halogen 
concentrations and desaturation in arfvedsonite developed the gradational boundary between the black and red kakortokites.Enhanced nucleation of eudialyte primarily occurred in situ at the crystal mush–magma interface, with minor nucleation of the other phases, and developed the red kakortokite.The concentration of chlorine decreased throughout the formation of red kakortokite.This is due to gradual equilibration of the basal magma layer with the resident magma, through loss of volatile elements due to crystallisation of the Cl-rich eudialyte and minor upward loss along a concentration gradient associated with sodalite crystallisation at the roof of the chamber.This, combined with desaturation of the melt in eudialyte, would allow alkali feldspar and nepheline to nucleate, both in suspension in the magma and in situ, developing the white kakortokite above a gradational boundary.This change in primary accumulation method is inferred to be occurring as the basal magma layer equilibrates with the resident magma body.At this stage nucleation occurred in a relatively halogen-poor magma with a density equivalent to the resident magma.The control on the order of nucleation may be directly related to the halogen content of the magma.High concentrations of halogens depolymerise silicate melts, reducing the length of the silicate chains that can crystallise.Arfvedsonite has a chain structure; eudialyte a ring structure; whereas alkali feldspar and nepheline are tectosilicates, thus the silicate connectivity of the mineral structure increases upwards through the unit.As arfvedsonite has the least complex structure, it may nucleate at high concentrations of halogens that inhibit crystallisation of the other phases.Arfvedsonite will take up fluorine from the magma during crystallisation, which would allow for crystallisation of more complex silicates, i.e. eudialyte.Crystallisation of eudialyte will take up chlorine from the melt, which again combined with upwards loss of volatile elements, would then allow alkali feldspar and nepheline to nucleate.Although Unit 0 is particularly well developed, the model here can be applied to the entire layered sequence.The present study observes that the general characteristics of each unit are remarkably consistent throughout the entire layered series indicating that each unit formed in a similar manner.Intra-unit chemical variations are described in the present and other studies.However, when considering a single layer, e.g. black, red or white, there is very little upwards variation in composition throughout the layered kakortokites.This consistency is ascribed in our model to replenishment by magmas with minimal compositional variations, allowing for crystallisation of units with similar chemical compositions.The discontinuities at unit boundaries between the underlying white kakortokite and overlying black kakortokite reflect replenishment events.The exact number of replenishment events required to develop the entire series is uncertain, but could be as many as 29, i.e. 
1 per unit.Each unit in the model accretes upwards, through both in situ growth and accumulation through gravitational settling, with each unit building up from the underlying.Accumulation of the overlying units will compact the underlying units and contribute to the textural development through enhancing fabric development.CSD data are inconsistent with gravitational settling as the primary mode through which the macrorhythmic layering developed.Instead, in situ crystallisation and in situ crystallisation combined with density segregation, i.e. settling and flotation were the key processes that developed Unit 0.The key control on unit development was an oscillating volatile element concentration, which decreased during the development of a unit and sharply increased at the boundary to the next.An open-system model is proposed whereby a replenishment event formed Unit 0.An initial high concentration of volatile elements allowed for the formation of the black kakortokite through nucleation of arfvedsonite, whereas nucleation of the other phases was suppressed.This nucleation combined with the minor processes of equilibration of the basal magma layer with the resident magma and upwards loss of volatiles along concentration gradients to the roof, decreased the concentration of halogens.This enabled development of red and then white kakortokite above gradational boundaries from the underlying layer.This alternative nucleation model, controlled by variations in volatile element concentration, has been debated.However, our approach of combining field analysis and detailed petrographic studies with quantitative textural and mineral chemical analyses provides more data to support the hypothesis.The final textures observed in Unit 0 indicate that late-stage grain growth occurred through overgrowth, i.e. intercumulus enlargement, and processes of textural coarsening.This was promoted through variations in the degree of undercooling throughout the cooling history of Unit 0.It resulted in modification of the primary CSD profiles.Although Unit 0 is particularly well developed, the model presented here may be generally applicable to the entire sequence of layered kakortokite as the general characteristics of each unit are remarkably consistent.This study provides greater insight into the magma chamber dynamics operating during the formation of the kakortokite, as open system behaviour is indicated with periodic injections of magmas that are more primitive than the resident magma.Ilímaussaq is arguably the most celebrated agpaitic body, but these processes may be important to understanding the origins of other layered agpaitic rocks, e.g. Lovozero, Russia; Nechalacho, Canada; and Pilanesberg, South Africa.The following are the supplementary data related to this article: photographs of the sampled rock sections (the Unit − 1 white kakortokite, the Unit − 1/Unit 0 boundary and the Unit 0 black, red and white kakortokites at Locations A to D), all taken under circularly polarised light, in which arfvedsonite can appear opaque in thick section, with arrows indicating way up where relevant and unit boundaries marked in red; the CSD input and output data; and the eudialyte mineral chemistry analysed via EPMA.Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.lithos.2016.10.023.
The peralkaline to agpaitic Ilímaussaq Complex, S. Greenland, displays spectacular macrorhythmic (> 5 m) layering via the kakortokite (agpaitic nepheline syenite), which outcrops as the lowest exposed rocks in the complex. This study applies crystal size distribution (CSD) analyses and eudialyte-group mineral chemical compositions to study the marker horizon, Unit 0, and the contact to the underlying Unit − 1. Unit 0 is the best-developed unit in the kakortokites and as such is ideal for gaining insight into processes of crystal formation and growth within the layered kakortokite. The findings are consistent with a model whereby the bulk of the black and red layers developed through in situ crystallisation at the crystal mush–magma interface, whereas the white layer developed through a range of processes operating throughout the magma chamber, including density segregation (gravitational settling and flotation). Primary textures were modified through late-stage textural coarsening via grain overgrowth. An open-system model is proposed, where varying concentrations of halogens, in combination with undercooling, controlled crystal nucleation and growth to form Unit 0. Our observations suggest that the model is applicable more widely to the layering throughout the kakortokite series and potentially other layered peralkaline/agpaitic rocks around the world.
488
Reprint of “Investigating ensemble perception of emotions in autistic and typical children and adolescents”
Human perception will often seek the summary, the texture or the ‘gist’ of large amounts of information presented in visual scenes.Large amounts of similar objects, for example, some books on a shelf, or the buildings of a city may give rise to group precepts – the percept of a book collection or a city view.Properties of group percepts – whether a book collection is tidied up or not, whether a view belongs to an old or a contemporary city – seem to be accessible rapidly and effortlessly, and with little awareness of details differentiating individual elements.This ability to assess automatically the summary or ‘gist’ of large amounts of information presented in visual scenes, often referred to as ensemble perception or ensemble encoding, is crucial for navigating an inherently complex world.Given the processing limitations of the brain, it is often efficient to sacrifice representations of individual elements in the interest of concise, summary representations, which become available as the brain rapidly encodes statistical regularities in notions of a ‘mean’ or a ‘texture’.Ensemble perception has been demonstrated consistently for low-level visual attributes, including size, orientation, motion, speed, position and texture.More recently, studies have also demonstrated ensemble perception in high-level vision."In Haberman and Whitney's initial work on ensemble perception – and on which the current study was based, three adult observers viewed sets of morphs ranging from sad to happy.Observers were then asked to indicate whether a subsequent test face was happier or sadder than the average expression of the set, a task that required creating an internal representation of an average of facial expressions in the first set.The precision with which the three observers completed this task was remarkably good.In fact, two of the three observers were as precise in discriminating ensemble emotions as they were in identifying the emotions of single faces.In another task, the same observers viewed sets of emotional morphs and were subsequently asked to indicate which of two new morphs was a member of the preceding set.All three observers were unable to perform above chance in this condition, suggesting that observers were unable to encode information about individual face emotions, despite being able to encode seemingly effortlessly information about average emotions.Subsequent work has shown these effects for a range of facial attributes.Sweeny et al. 
have also shown that ensemble perception of size is also present, though not yet fully developed early in development, in 4–6 year-old children.In the primary condition of their child-friendly task, participants saw two trees, each containing eight differently sized oranges, and were asked to determine which tree had the largest oranges overall.A secondary condition included experimental manipulations that allowed for the empirical simulation of performance in the primary condition with no ensemble coding strategies available–that is, as if participants gave their response after comparing the sizes of a single, randomly-chosen orange from each tree.The difference in accuracy between the primary and secondary conditions provided an estimate of the extent to which participants benefited from the use of ensemble perception strategies, the ‘ensemble coding advantage’.They found significant ensemble coding advantages in both young children and adults, although children presented smaller such advantages than adults.An ideal observer model, which was also used to predict the minimum number of items integrated in the primary condition, suggested that both children and adults did not necessarily derive ensemble codes from the entire set of items, while children integrated fewer items than adults, consistent with the smaller ensemble coding advantage they exhibited.In the current study, we examined ensemble perception of emotions in autistic children and adolescents, and contrasted these with typical children, adolescents and adults.Autism is a highly heterogeneous neurodevelopmental condition known for difficulties in social interaction and communication.However, autism is also characterised by atypicalities in sensation and perception.Many studies have focused on the processing of social stimuli and of faces in particular.This literature presents a confusing picture.While many studies have reported that autistic children present pervasive difficulties in emotion discrimination, other studies have found such difficulties specifically for negative or more complex emotions or no difficulties at all.Prominent theories have suggested difficulties in social perception might be driven by fundamental problems in global processing or a local-processing bias that leads to strengths in the processing of simple stimuli and to weaknesses in the processing of more complex stimuli.We have suggested that the unique perceptual experiences of individuals with autism might be accounted for by attenuated prior knowledge within a Bayesian computational model of perceptual inference.This hypothesis posits limitations in the abilities of individuals with autism to derive, maintain and/or use efficiently summary statistics representations for the recent history of sensory input.Such limitations lead to a processing style where sensory input is modulated to a lesser extent by norms derived from prior sensory experience.Karaminis et al. have recently demonstrated this account formally, in the context of temporal reproduction, using a Bayesian computational model for central tendency, which suggested that the phenomenon reflects the integration of noisy temporal estimates with prior knowledge representations of a mean temporal stimulus.Karaminis et al. 
contrasted the predictions of this ideal-observer model with data from autistic and typical children completing a time interval reproduction task and a temporal discrimination task.The simulations suggested that central tendency in autistic children was much less than predicted by computational modelling, given their poor temporal resolution."Pellicano and Burr's hypothesis has also received empirical support from studies showing diminished adaptation in the processing of face and non-face stimuli.Such findings appear to generalise to ensemble perception, i.e., summary statistics representations derived on a trial-by-trial basis from stimuli presented simultaneously and for brief time intervals.Rhodes et al. have developed a child-appropriate version of a paradigm for ensemble perception of face-identity, which they administered to 9 autistic children and adolescents and 17 age- and ability-matched typical children.These authors found reduced recognition of averaged identity in autistic participants."In the current study, we evaluated two predictions, based on Pellicano and Burr, for the patterns of performance of autistic and typical children and adolescents by developing a developmentally-appropriate version of Haberman and Whitney's paradigm for ensemble perception of emotions.First, we predicted that autistic children should present difficulties in Task 1 assessing average emotion discrimination, evidenced by lower precision than typical children in the average relative to the baseline emotion discrimination task.We further tested this prediction using computational modelling and eye-tracking methodologies.Computational simulations should suggest a weaker ensemble coding advantage and fewer items sampled in autistic children compared to typical children.Eye-tracking data could also reveal atypicalities in the ways autistic children attended to the stimuli.Second, we predicted that autistic children should perform better than typical children in Task 3, identifying emotional morphs that had been previously presented to them.This advantage could be due to a greater reliance upon detailed representations of individual items, which are more important in this particular task, rather than on summary statistics.Finally, we also included a group of typical adults to examine developmental differences between children and adults in ensemble perception of emotions."We hypothesised that children were likely to show reduced abilities for ensemble perception compared to adults, similar to Sweeny et al.'s findings for the development of ensemble perception of size. 
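As a concrete illustration of the 'ensemble coding advantage' logic outlined above, the following minimal simulation (a hypothetical Python sketch, not the study's MatLab code; the noise level and the decision rule are assumptions) shows why an observer who averages the noisy percepts of several faces judges a set's mean emotion more accurately than one who samples a single face:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(n_sampled, noise_sd=6.0, n_trials=5000):
    # Proportion correct when judging whether a set of four faces is, on average,
    # happier or sadder than the continuum midpoint (25), after averaging the
    # noisy percepts of n_sampled randomly chosen faces from the set.
    correct = 0
    for _ in range(n_trials):
        set_mean = rng.choice(np.arange(10, 41, 2))           # tested means, as in the tasks above
        faces = set_mean + np.array([-9.0, -3.0, 3.0, 9.0])   # four faces separated by 6 units
        sampled = rng.choice(faces, size=n_sampled, replace=False)
        percept = np.mean(sampled + rng.normal(0.0, noise_sd, size=n_sampled))
        correct += ((percept > 25.0) == (set_mean > 25.0))
    return correct / n_trials

for n in (1, 2, 3, 4):
    print(f"averaging {n} face(s): accuracy ~ {simulate_accuracy(n):.3f}")
```

Accuracy rises with the number of faces averaged because the noise on the average falls roughly as 1/√N; the empirical and model-based analyses reported below quantify how far each participant group exploits this benefit.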
"Participants' demographics are shown in Table 1.Thirty-five autistic children and adolescents aged between 7 and 16 years were recruited via schools in London and community contacts.All autistic children/adolescents had an independent clinical diagnosis of an autism spectrum disorder and met the criteria for an ASD on the Autism Diagnostic Observation Schedule – 2 and/or the Social Communication Questionnaire – Lifetime.Thirty typically developing children and adolescents, recruited from local London schools and community contacts, were matched with autistic children in terms of chronological age, t = 0.19, p = 0.85, as well as on verbal IQ, t = 1.34, p = 0.19, performance IQ, t = 0.02, p = 0.99, and full-scale IQ, t = 0.82, p = 0.41, measured by the Wechsler Abbreviated Scales of Intelligence – 2nd edition.All children were considered to be cognitively able.25 typical adults, aged between 18.70 and 44.40 years recruited from the University and community contacts, also took part.Four additional autistic children and 4 typical children were tested but excluded from the analysis due to poorly fitting psychometric functions.Two additional autistic children were excluded because their IQ scores were lower than 70.Stimuli were two sets of 50 faces created by linearly interpolating two emotionally extreme faces, one with a sad expression and one with a happy expression of a boy and a girl.The emotional extremes were chosen from the Radboud face database, based on their rankings of emotional intensity, clarity, genuineness and valence.Linear interpolation was performed using morphing software, placing 250 landmarks in each endpoint face.The two sets of faces were taken to establish two continua of 50 emotional morphs, from sad to happy.Similar to Haberman and Whitney, the distance between two successive morphs was one emotional unit, an arbitrary measure of the representation of happiness in successive morphs, assumed constant across the two continua and the same for the two sets.The saddest morphs were assigned an emotional valence value of 1, the happiest of 50, while the mean of the continua of 25.Each face subtended 5.19° × 4.16° of visual angle.Depending on the task, faces were presented in three possible configurations: i) in a passport photograph setup, i.e., as a group of four faces in a 2 × 2 grid, presented in the middle of the screen and over 10.81° x 8.79° of visual angle; ii) as reference stimuli on the left and the right hand corner of the screen, 10.30° left or right and 5.19° above centre screen; iii) in a 1 × 2 grid, subtending 5.19° × 8.79°.The experiments also included a centrally located fixation point, of grey colour and a diameter of 0.31° of visual angle.Stimuli were presented on a light grey background of a 15.6-inch LCD monitor with 1920 × 1080 pixel resolution at a refresh rate of 60 Hz.All participants viewed the stimuli binocularly from a distance of 55 cm from the screen.We wrote the experiments in MatLab, using the Psychophysics Toolbox extensions.All child/adolescent and adult participants were given three tasks: 1) an ensemble emotion discrimination task; 2) a baseline emotion discrimination task; and 3) a facial expression identification task.The order of presentation of tasks was counterbalanced across participants, as was stimuli gender.Tasks were presented in the context of a child-friendly computer game, in which participants competed with characters from a popular animated movie in activities involving judging emotions of clones of a boy or a girl or 
identifying clones who had been presented to them before.In this task, participants were told they would see a sad and a happy face appearing in the left and the right corner of the screen, correspondingly, and four different faces appearing near centre-screen for a limited time.They were instructed to indicate whether the four clones were overall more like the happy or the sad clone using the keyboard.The experimenter used hand gestures to indicate the notion of ‘overall’.As shown in Fig. 1, each trial began with the reference stimuli presented near the two upper corners of the screen, along with the four faces in a 2 × 2 grid, presented in centre-screen for 2 s.The reference stimuli remained on screen for the duration of the trial.Faces in the grid were all different from each other, separated by a standard distance of 6 emotional units.This meant that the emotional mean of each set was 9 units higher than the saddest face and 9 units lower than the happiest face in the set.The task comprised 6 practice trials and 80 test trials.Practice trials familiarized the participant with the procedure and tested the following emotional means: 10, 40, 15, 35, 40 and 10.Feedback was given.Practice trials were repeated if the participant produced incorrect judgements in at least two trials.The 80 test trials comprised 5 repetitions of 16 values of tested emotional means: 10, 12, 14, …, 40.No feedback apart from general positive encouragement was given to participants.This task was identical in procedure to the average emotion discrimination task, including 6 practice trials and 80 test trials.In this task, however, the four faces in the 2 × 2 grid were indistinguishable.This implied a zero variance in which the emotional valence of the four faces coincided with the tested mean.We used four identical faces rather than just one face to achieve similar levels of perceptual complexity across the two tasks.Participants were told they would see a sad and a happy clone appearing in the left and the right corner of the screen, correspondingly, and four identical clones appearing near centre-screen for a limited time.Participants were instructed to indicate whether the four clones were more like the happy or the sad clone using the keyboard.Practice trials tested the following emotional means: 10, 40, 15, 35, 40 and 10 and included feedback.They were repeated in the case of incorrect judgements in at least two of these trials.Test trials included 5 repetitions of 16 values of tested emotional means: 10, 12, 14, …, 40.No feedback was given.In this task, participants were told they would see four faces appear on the screen and then disappear.Two more faces would then appear.Participants were instructed to indicate which of the two faces was present in the group of four faces by making a corresponding keypress.As shown in Fig. 
1, trials began by presenting a 2 × 2 grid of four different faces in centre-screen.These faces differed by 6 emotional units, i.e., similarly to the average emotion discrimination task.The emotional mean of the faces in the grid ranged from 10 to 40 with an increment of 2 emotional units.After 2 s, the first set of faces disappeared and a new set of two faces was shown in centre-screen.A target face in the second set was also a member of the first set of faces while the other was a distractor.The distance between the two faces in the second set could take one of three values: 3, 15 or 17 emotional units.There were 6 demonstration trials and 80 test trials.Demonstration trials used target-distractor distances of 20 and 15 in this order: 20, –20, +15, –15, –20, +20, combined with the following emotional mean values: 10, 40, 35, 15, 10, 40.These were repeated for five autistic children and four typical children.Of the 80 test trials, 26 tested a target-distractor distance of 3 emotional units, 27 tested a distance of 15 emotional units, and 27 tested a distance of 17 units.Similar to the other two tasks, test trials considered 5 repetitions of 16 emotional means, assigned randomly to testing trials.Children were tested individually in a quiet room at the University, at school or at home, and adults were tested in a quiet room at the University or at home.Testing lasted around 30–40 min.We collected eye-tracking data using a Tobii-X30 eye tracker, with a five-point calibration procedure repeated prior to each task.The WASI-II and the ADOS-2 were administered in later sessions.The University's Faculty Research Ethics Committee approved this study.Adults gave their informed written consent and parents gave their consent for their child's participation prior to taking part.For ensemble and baseline emotional discrimination, we fitted individual data from participants with cumulative Gaussian functions, using bootstrapping with 200 repetitions and a 'maximum likelihood' fitting method.From the fitted curves we derived precision thresholds for each condition.We conducted a mixed-design ANOVA on these measures with condition as a repeated measures factor, and group as a between-participants factor.For facial expression identification, we measured accuracy in the three conditions of the tested distance.We examined whether these measures differed from chance performance with two-tailed t-tests and examined differences across groups and conditions by conducting a 3 × 3 mixed-design ANOVA.We also examined correlations between the so-obtained measures and age and performance IQ in the two groups of children, as well as correlations with measurements of autistic symptomatology in the group of autistic children, and correlations between precision thresholds in the two conditions in all groups.We calculated Pearson's linear correlations with permutation tests, correcting for multiple comparisons using the "max statistic" method to adjust the p-values.This method controlled for the family-wise error rate without being as conservative as Bonferroni correction.We also used Fisher tests to assess whether correlations differed significantly between groups, adjusting alpha levels for multiple comparisons with the Šidák method (adjusted α = 1 − (1 − α)^(1/N), with α = 0.05).Fisher tests were therefore conducted with adjusted alpha levels of 0.008 per test.Participants' eye movements were analysed to provide additional insight into the way participants attended to the stimuli.We obtained usable eye-tracking data for 27 autistic children, 17 typical children and 14
adults.For these participants, we focused on trials where fixations were detectable at least 90% of the time, around 50% of the trials for all groups.We analysed recordings for these trials by deriving a scanning path for each participant and each trial.A scanning path was defined as a sequence of fixations in one of the four regions-of-interest, the square areas where the four facial expressions were shown on screen.A participant was taken to fixate in a given region if gaze remained in that region for more than 150 ms, otherwise these data were not included in the scanpath.For the scanning-path length and the mean number of samples scanned by a given participant in each task, we conducted mixed-design ANOVAs with task as a repeated measures factor, and group as a between-participants factor.Our computational modeling aimed to assess the amount of information that participants used in the ensemble emotion discrimination task.This is akin to the approach in Sweeny et al. on ensemble perception of size."Sweeny et al. included a control condition that allowed for the behavioural simulation of participants' abilities to perceive average size with no ensemble coding strategies available.Contrasting performance in this condition with performance in the principal condition, where participants were required to employ ensemble perception strategies, yielded the ensemble coding advantage.The ensemble coding advantage essentially measured the extent to which participants utilized ensemble perception strategies.Sweeny et al. also considered ensemble perception advantages predicted by ideal-observer models that assumed pooling of different amounts of items.They contrasted modelling results with human data to predict the number of individual items that participants integrated in the ensemble perception task in that study."Our computational modeling work aimed to perform a similar analysis and provide two measures characterizing the performance of individual participants: 1) the ensemble coding advantage, and 2) the number of samples that best accounted for the participant's actual performance in ensemble emotion discrimination.Furthermore, our computational modeling aimed to contrast the performance of different groups in ensemble emotion discrimination given their baseline emotion discrimination abilities.This was akin to the modeling approach in Karaminis et al., who assessed the amount of central tendency in temporal interval reproduction in autistic and typical children and adults, taking into account their temporal resolution abilities.This study showed that the patterns of performance of autistic children in time interval reproduction/discrimination were closer to the predictions of a computational model employed attenuated prior knowledge representations of a mean interval.Here, the modeling aimed to assess whether the patterns of performance of autistic children are suggestive of less reliance on ensemble coding or of the integration of fewer items.The perceived ensemble facial expression was then categorised as happy if it was higher than the point of subjective equality in the fitted psychometric curve for this participant in baseline emotion discrimination and as sad otherwise.The ideal observer model assumed no noise in the integration process per se.The integration of the noise-perturbed emotionality values for faces in the sample was therefore perfect, implying that the simulated precision of participants in average emotion discrimination was a lower-bound estimate, corresponding to 
optimal performance.The ensemble coding advantage essentially contrasted the precision of a given participant in average emotion discrimination with the precision that the same participant would exhibit in this task if s/he responded after randomly sampling a single face from the test sets."Second, and in a complementary analysis, we used the simulated precision values in all four ideal observer models for a given participant to estimate the number of samples that best accounted for the participant's actual performance in average emotion discrimination.This was done by fitting an exponential curve to the precision values obtained from the ideal observer models with N = 1, 2, 3 and 4, and then identifying the value of N that corresponded to the precision of the participant in average emotion discrimination in that curve.Individual data from participants were well fit by cumulative Gaussian functions.A preliminary analysis showed no effect of gender on performance in any task so data were collapsed across stimulus gender."First, we looked at participants' precision in ensemble and baseline emotion discrimination in autistic and typical children and adults.Fig. 2 shows precision thresholds, given by the standard deviation of the fitted cumulative Gaussian functions, for the three groups in the average and baseline emotion discrimination tasks.We conducted a mixed-design ANOVA with condition as a repeated measures factor, and group as a between-participants factor.There were significant effects of condition, F = 32.55, p < 0.001, np2 = 0.27, and group, F = 6.28, p = 0.003, np2 = 0.13, but no condition x group interaction, F = 1.75, p = 0.18, np2 = 0.04.The analysis therefore suggested that, unlike Haberman and Whitney, precision in ensemble emotion discrimination was worse than precision in individual emotion discrimination.This pattern was identical across groups.Planned contrasts suggested significant differences in precision between adults and typical children, t = 0.95, p < 0.001, consistent with Sweeny et al.Contrary to expectations, there were no significant differences in precision between autistic and typical children.Next, we investigated within-group variability in ensemble emotion discrimination in autistic and typical children.An examination of age-related improvements revealed no significant correlations between precision thresholds and ensemble emotion discrimination in typical and autistic children ."However, autistic children's precision thresholds in ensemble emotion discrimination were highly correlated with their WASI-II Performance IQ scores , a relationship not found in typical children .Fisher z-transformation tests suggested that the correlations between ensemble perception thresholds and age did not differ significantly in the two groups of children, while the correlations between the ensemble perception threshold and Performance IQ was not different in the two groups of children.No systematic relationships between precision thresholds in baseline emotion discrimination and chronological age or Performance IQ were found in either typical or autistic children.We also examined correlations between precision thresholds in ensemble and baseline emotion discrimination.These precision measures were strongly and positively correlated within the autistic group , but not for typical children or adults .However, these correlations were not significantly different in autistic and typical children.Finally, within the autistic group, there were no significant correlations between 
autistic symptomatology, as measured by the ADOS-2 and SCQ, and precision thresholds in baseline and ensemble emotion discrimination."Similar to Haberman and Whitney, we evaluated children and adults' accuracy in identifying morphs previously presented to them for the three conditions for the target-distractor emotional distance.Accuracy rates for the three groups are shown in Fig. 4.As expected, and consistent with Haberman and Whitney, accuracy was at chance for test stimuli with a target-distractor distance of 3 emotional units for all three groups .Unexpectedly, however, performance was above chance for test stimuli with distances of 15 or 17.We examined group differences in accuracy in the three conditions of the face-identification task by conducting a mixed-design ANOVA.There were significant effects of condition , group, F = 9.89, p < 0.001, n2 = 018, but no condition x group interaction .Planned comparisons suggested significant differences in accuracy between adults and typical children, t = 0.06, p = 0.001, but, crucially, no differences were found between autistic and typical children.Examination of age-related improvements or improvements with Performance IQ revealed no significant correlations in emotional expression identification.There were also no significant correlations between autistic symptomatology and accuracy in emotional expression identification.Fig. 5 shows the calculated ensemble coding advantages for the three groups .Ensemble perception advantages were significant for all three groups .Unexpectedly, there was no main effect of group, F = 1.79, p = 0.17.Planned contrasts suggested that adults did not present a greater ensemble coding advantage compared with typical children, t = 1.50, p = 0.13, and, importantly, there was no significant difference between the two groups of children, t = 0.13, p = 0.90.Fig. 6 presents precision in average emotion discrimination of the three groups along with the simulated precision obtained from the ideal observer models with N = 1, 2, 3, 4.The red lines connect model-predicted precision based on the data of individual participants.Fitting of an exponential curve to the model data yielded a non-integer N value representing the mean number of different emotional expressions sampled by a given participant in the average emotion discrimination task, according to the ideal observer model.Fig. 
7 shows this measurement for the three groups.These were all significantly greater than 1 .A one-way ANOVA revealed no significant effect of group, F = 0.65, p = 0.52, suggesting that the model predicted no difference between the three groups in terms of the faces sampled in ensemble emotion discrimination.Thus, the two model-based measures of ensemble perception did not present between-group differences as those found for precision in average emotion discrimination.However, the two model-based measures presented different patterns of within-group individual variability in autistic and typical participants, which were, importantly, largely consistent with patterns found in the empirical data for precision in average emotion discrimination.Ensemble coding advantage was highly correlated with age in typical children , but not in autistic children , though such a contrast was not present for the number of sampled faces .The two model-based measures were also highly correlated with Performance IQ within the autistic group , but not in the typical group .Although Fisher tests suggested that there was no difference in the correlations between chronological age and ensemble coding advantage in the two groups of children, importantly, they showed that the correlations between Performance IQ and the two modelled based measures were significantly different between the groups.These correlations are shown in Fig. 3.The between-group difference in the correlations between Performance IQ and ensemble coding advantage retained its significance within the adjusted alpha level when the outlying ensemble coding advantage of a typical participant was trimmed to 2 SD from the mean,.Finally, autistic symptomatology did not correlate significantly with the model-based measures of ensemble perception .Fig. 
8 demonstrates the average number of different faces that the participants looked at in trials of the three tasks, for the three groups.A mixed-design ANOVA showed a significant quadratic effect of task on the number of faces sampled, F = 51.17, p < 0.001, np2 = 0.4, but no significant effect of group, F = 1.81, p = 0.17, np2 = 0.06, and no significant interaction between group and task .Therefore, the three groups were indistinguishable in terms of the number of different morphs they sampled across the trials of the three tasks.They also presented a common pattern in which the number of different faces sampled was slightly higher in the average emotion discrimination than in the baseline emotion discrimination and the face-identification task.Finally, we examined individual variability within the two groups of children with respect to eye-tracking variables.This analysis showed no systematic relationships between the way autistic or typical children attended to the stimuli and age or Performance-IQ and no significant correlations with autistic symptomatology in the autistic group.A large body of empirical research has demonstrated the abilities of human perception to rapidly and automatically extract the summary or the gist of large amounts of information presented in visual scenes, also referred to as ensemble perception.We hypothesised that this fundamental ability for ensemble perception might be compromised in autistic children, who are held to present limitations in forming, accessing and/or using efficiently summary statistics representations for the recent history of their sensory input.Our hypothesis yielded two testable predictions: that autistic children should present worse precision than typical children in a task involving ensemble perception of emotional morphed faces; and autistic children might be more accurate than typical children in tasks that involve identification of individual faces.In direct contrast, we found no differences between autistic and typical children in terms of their precision in ensemble and baseline emotion discrimination, and in their accuracy in face identification.Our results showed that, relative to typical children, autistic children presented neither a limitation in ensemble perception nor an advantage in face identification.The two groups also did not differ in ensemble coding advantage and the number of samples integrated in each task, as suggested by the computational model.Eye-movement data further corroborated these findings: autistic and typical children looked at the same number of faces per trial on each task.Our analysis therefore showed that, on average, autistic and typical children performed largely similarly on our paradigm.To examine further performance in ensemble emotional expression discrimination in isolation from baseline emotion discrimination, our study used computational modelling.Computational modelling suggested significant ensemble coding advantages for all three groups and that all groups integrated more than one face to determine the average emotion of a set.However, the three groups did not differ in these model-based measures."It is important to note that our modelling approach was conservative and the estimates of the participants' ensemble coding advantages and the number of faces they integrated in average emotion discrimination was lower-bound.While the model simulated baseline emotion discrimination taking into account estimates of noise derived from the baseline emotion discrimination task, it did not include 
any late-stage noise in the integration process, like in other ideal-observer simulations of ensemble coding.This late-stage noise would arguably increase the estimates of the precision of integration: that is, the model would predict higher levels of ensemble perception for a given value of precision in the average emotional expression discrimination task.In the absence of relevant empirical data, especially for differences between autistic and typical children, we opted to include no arbitrary constraints for late-stage noise in our model.Eye-movement data on the other hand provided an upper-bound estimate of the number of faces that each participant integrated when completing each task.Our eye-movement data did not suggest that differences in the way the three groups attended to the stimuli, in particular in the number of different faces scanned across trials.We also investigated within-group individual variability in ensemble perception.This analysis revealed an interesting difference in the development of ensemble perception in autistic and typical children.In the group of autistic children, ensemble perception was closely related to their non-verbal reasoning ability.This relationship was not present in the group of typical children.This finding was supported by the computational modelling results rather than the empirical results in ensemble perception.Computational modelling assessed performance in ensemble emotion discrimination focusing on the amount of information integrated by participants and ruling out differences in baseline emotion discrimination.Our results therefore suggest that ensemble perception per se presents an asymmetric relationship with general perceptual and reasoning abilities in autistic and typical children.Indeed, our findings raise the possibility that ensemble perception might be fundamentally different in autistic and typical children.Ensemble coding in autistic children could be achieved through alternative cognitive strategies, possibly involving some kind of perceptual reasoning over individual emotional expressions.By contrast, in typical children, ensemble perception might involve domain-specific cognitive mechanisms.We also showed that typical children performed worse than adults in all three tasks, presenting worse precision in baseline and average emotion discrimination and worse accuracy in the face-identification task.Our data suggested that abilities for ensemble perception of emotion, as well as the abilities for baseline emotion discrimination and emotional expression identification, are available early in development.These findings are consistent with the findings of Sweeny et al. 
on ensemble perception of a non-social stimulus, namely size, in younger children.However, our data could not demonstrate developmental improvements, as correlations between precision measures in Tasks 1 and 2 or model-based measures of ensemble perception were not significant.Arguably, this might reflect a power issue.Eye-movement data, too, showed no systematic correlations with age or performance IQ.Thus, differences in performance between children and adults in the three tasks, as well as individual variability in performance within the two groups of children, were not related to looking differences.Our findings that ensemble perception of emotional expression is, on average, similar in autistic and typical children contrast with those of Rhodes et al., who reported ensemble coding limitations in autistic individuals for face identity.One possibility is that this discrepancy is due to different mechanisms underlying the extraction of summary statistics for facial identity and emotions, consistent with theoretical proposals for the involvement of different pathways in the processing of invariant aspects of faces, such as identity, and changeable aspects, such as expression.However, we would also argue that the findings of Rhodes et al. warrant replication, especially since the sample of autistic individuals was very small and could not provide enough statistical power for the consideration of within-group variability.Two patterns in our results, which characterized the performance of all three groups, were inconsistent with the original study by Haberman and Whitney.First, we found that precision in ensemble emotion discrimination was worse than precision in baseline emotion discrimination.Haberman and Whitney found no difference between these two conditions for two of their three participants.Second, we found that accuracy in face identification was at above-chance levels for target-distractor emotional distances of 15 and 17.Haberman and Whitney had found that accuracy was at chance for all conditions of their face identification task.These discrepancies between our findings and Haberman and Whitney are likely to reflect a number of methodological differences, which were introduced in our study to develop a child-appropriate version of the original paradigm.Our findings that accuracy in face identification was at above-chance levels suggested that children, adolescents and adults present abilities for ensemble perception, as well as abilities to represent individual items.This pattern is consistent with other studies on ensemble perception.Our results also suggested that autistic children/adolescents had no problems in emotion perception, either in the baseline or the ensemble discrimination tasks or even in the identification of facial expressions.This finding is in line with previous studies reporting no differences between autistic and typical children in emotion discrimination and identification tasks.It is possible that our task was simply not sufficiently difficult to detect differences between autistic and typical children in any of the three tasks.However, this is unlikely, given the significant age-related differences between children and adults.Another potential limitation is that our results reflect sampling issues and that ensemble processing abilities would not be as robust in a group of autistic children with poorer baseline emotion discrimination abilities.Nevertheless, the different individual variability profiles of the two groups of children in ensemble perception
demonstrate that it is important for future studies on ensemble perception to consider individual differences.Our results also demonstrate the need to refine prominent theories of autistic perception, for example theories suggesting limitations in global processing, the processing of more complex stimuli and, of course, the hypothesis of attenuated prior knowledge.To account for our data, these theories need to accommodate mechanistic accounts for how qualitatively different strategies might give rise to similar overall performance in ensemble perception in typical development and the autism spectrum.Gaining knowledge of the temporal dynamics of ensemble perception would be a valuable way to address this issue.For example, our results suggest that ensemble perception could be less rapid as a process in autistic children, due to its greater reliance on some kind of perceptual reasoning.Our study, and the original study of Haberman and Whitney, obtained responses after the stimuli have remained on screen for 2s, and therefore could not provide reliable measures of reaction times.Studies with time-contingent designs, more demanding stimuli, as well as electrophysiological approaches could be used to assess the rapidity of ensemble perception in typical development and autism.Theories of autistic perception and ensemble perception also need to consider the possibility of efficient compensation for ensemble perception in autism.Developmental and other studies on ensemble perception have argued that its early emergence and ubiquity reflect its fundamental importance in perception and, in the case of social stimuli, in the development of social behaviour and cognition.A number of previous studies have also established that autistic individuals present atypical adaptation to various dimensions of facial stimuli, suggestive of limitations in their abilities to extract norms for faces seen during the recent history of sensory input.Such limitations might give rise to difficulties in ensemble perception, with profound effects in their ability to adapt and respond to social environments.It is possible that these difficulties are compensated in autism through the use of domain-general perceptual reasoning over individually perceived stimuli.If this is the case, adults on the autism spectrum should also show a reliance of abilities for ensemble perception on perceptual reasoning abilities.Finally, it is important to ask whether our findings are specific to ensemble perception of facial attributes or whether they generalise to low-level stimuli.An interesting possibility is that qualitative differences in ensemble perception should manifest in domains where autistic individuals present diminished perceptual adaptation, rather than domains where adaptation is similar to typical development.
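To make the analysis pipeline described above concrete, the sketch below reimplements its main steps in Python (the original analyses were run in MatLab/Psychophysics Toolbox; the per-face noise model, the sampling scheme and the exact exponential parameterisation are illustrative assumptions rather than the authors' code): a cumulative-Gaussian psychometric fit supplies the baseline precision and point of subjective equality, ideal observers pooling N = 1–4 noise-perturbed faces are simulated, and a participant's measured ensemble precision is converted into an ensemble coding advantage and an effective number of sampled faces via an exponential fit to the simulated precisions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(1)

def cum_gauss(x, mu, sigma):
    # Psychometric function: P("happier" response) as a function of the tested emotional mean.
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_psychometric(levels, p_happy):
    # Fit the PSE (mu) and the precision threshold (sigma, the SD of the fitted curve).
    popt, _ = curve_fit(cum_gauss, levels, p_happy,
                        p0=[25.0, 5.0], bounds=([0.0, 0.1], [50.0, 50.0]))
    return popt

def ideal_observer_sigma(n_sampled, baseline_sigma, pse,
                         levels=np.arange(10, 41, 2), n_trials=2000):
    # The simulated observer draws n_sampled of the four faces (without replacement),
    # perturbs each by noise taken from baseline discrimination (an assumption),
    # averages them perfectly, and answers "happier" if the average exceeds the PSE.
    p_happy = []
    for m in levels:
        faces = m + np.array([-9.0, -3.0, 3.0, 9.0])            # four faces, 6 units apart
        order = np.argsort(rng.random((n_trials, 4)), axis=1)[:, :n_sampled]
        picks = faces[order]
        percept = (picks + rng.normal(0.0, baseline_sigma, size=picks.shape)).mean(axis=1)
        p_happy.append(np.mean(percept > pse))
    _, sigma = fit_psychometric(levels, np.array(p_happy))
    return sigma

def effective_n(measured_sigma, model_sigmas, ns=np.array([1.0, 2.0, 3.0, 4.0])):
    # Fit sigma(N) = a*exp(-b*N) + c to the four model precisions and invert it to
    # find the (possibly non-integer) N that matches the measured ensemble precision.
    def f(n, a, b, c):
        return a * np.exp(-b * n) + c
    (a, b, c), _ = curve_fit(f, ns, model_sigmas,
                             p0=[model_sigmas[0], 0.5, model_sigmas[-1]], maxfev=20000)
    if measured_sigma <= c:
        return 4.0                                               # at least as precise as pooling all four
    return float(np.clip(-np.log((measured_sigma - c) / a) / b, 1.0, 4.0))

# Example with made-up numbers: baseline sigma of 6 units, measured ensemble sigma of 8 units.
baseline_sigma, pse, ensemble_sigma = 6.0, 25.0, 8.0
model_sigmas = [ideal_observer_sigma(n, baseline_sigma, pse) for n in (1, 2, 3, 4)]
print("ideal-observer precisions (N=1..4):", np.round(model_sigmas, 2))
print("ensemble coding advantage:", round(model_sigmas[0] - ensemble_sigma, 2))
print("effective number of faces sampled:", round(effective_n(ensemble_sigma, model_sigmas), 2))
```

Because the integration step is assumed noiseless, the simulated precisions act as lower-bound (best-case) estimates, matching the conservative stance taken in the modelling described above.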
Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an ‘ensemble’ emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.
489
Improved understanding of environment-induced cracking (EIC) of sensitized 5XXX series aluminium alloys
A catalogue of service issues experienced by the U.S. Navy over the last decade or so has resulted from aluminum-magnesium alloys becoming sufficiently ‘sensitized’ during their use to subsequently suffer environmental-induced cracking.This has led to a resurgence of interest and research activity, stimulated by both scientific curiosity and the availability of industry and U.S. Navy funding.A comprehensive overview, now available in a dedicated special issue of the NACE publication, Corrosion , provides a clear and informative assessment of our current understanding of the phenomenon, with well-informed review articles providing a historical overview and overviews on the roles of alloy sensitization , intergranular corrosion , environment-induced cracking and the detection and mitigation of the corrosion-related service issues .It is well established that critical surface defects are a required pre-requisite for intergranular environment-induced cracking initiation during the slow strain rate testing of smooth tensile specimens of commercial Al-Cu-Mg and Al-Zn-Mg series aluminium alloys .Pre-exposure to saline environments ahead of subsequent straining in moist-air is reported to stimulate crack initiation and enhance initial crack propagation rates .This study was prompted by the lack of equivalent information for commercial non-precipitate hardening Al-Mg based alloys containing sufficient magnesium to exhibit/develop sensitization at ambient temperatures.In particular, high-resolution X-ray tomography and SEM fractography have been conducted on interrupted short transverse SSRT of AA 5083-H131 given various thermal and environmental pre-exposures to help shed light on the EIC mechanisms.Testing was performed on 29 mm thick commercial AA5083-H131 plate material with a chemical composition in wt% of: 4.46 Mg, 0.62 Mn, 0.08 Cr, 0.12 Fe and 0.07 Si, balance Al, established using Optical Emission Spectroscopy, a standard technique used in the aluminum industry for melt analysis of solidified samples.Alloy susceptibility/Degree of Sensitization to intergranular corrosion was characterized by nitric acid mass loss testing in accordance with ASTM Standard G67 with the as-received plate material was essentially non-sensitized having a NAMLT value of 8.6 mg/cm2.As reported previously the plate material had typical pancake-shaped grains elongated along the principal rolling direction with an average length of 150 µm, and widths of 35 µm and 80 µm.Short transverse tensile yield stress, ultimate tensile stress and plane-strain fracture-toughness properties were 375 MPa, 260 MPa and 31 M Nm−3/2, respectively .Conventional SSRT, employing a nominal strain rate of 10‐5 /s, was conducted on cylindrical tensile specimens with a 12.7 mm gauge length and 3.20 mm diameter machined in the short-transverse orientation to maximize sensitivity to EIC .SSRT samples were plastically strained in either laboratory air or packed in anhydrous magnesium perchlorate granules, known to generate very dry air .Specimens were tested in the as-received condition and also after they had been subjected to sensitization treatments of 250 h at 80 °C and pre-exposed to a 0.6 M sodium chloride solution at room temperature for times up to 375 h to introduce intergranular corrosion and hydrogen charging.Following pre-exposure tensile specimen gauge lengths were carefully dried and immediately subjected to SSRT.Samples were tested to failure and the fracture surfaces studied using scanning electron microscopy.In a few instances 
SSRT was interrupted prior to final fracture and un-loaded tensile specimens were then subjected to high-resolution 3D X-ray computed tomography to characterize the crack morphology and local damage promoted in the crack front process-zone during EIC propagation.Fracture surface analysis on all fractured samples was performed on a FEI Quanta SEM and X-ray CT scanning was conducted using a Zeiss Xradia Versa 520 systems operating at 70 kV and using the 4x coupled optical magnification providing a voxel size of3.Additional experimental details are provided elsewhere .SSRT data are summarized in Table 1 and representative engineering stress vs. relative strain curves are provided in Fig. 1.The mechanical response under SSRT conditions for as-received AA5083-H131 material strained in dry air proceeds as expected with an initially linear elastic response followed by yield, extensive elongation with some evidence of serrated flow at high strains and the attainment of UTS before necking.This response provides a prototype response for comparison to the range of sensitized and/or pre-exposed samples investigated here.SEM examination of these failed samples revealed ductile failure by micro-void coalescence with typical small circular dimples.This response is exemplified by the result on Sample #32 as shown in Fig. 1.Significant differences result when test samples were subjected to various sensitization and pre-exposure treatments prior to SSRT in ‘dry’ or ‘humid’ air.These differences may be summarized as:Regardless of pre-exposure to 0.6 M NaCl or alloy sensitization, testing in dry air resulted in essentially equivalent yield and ultimate tensile stress values when appropriate allowance was taken for load-bearing cross-sectional area losses due to intergranular corrosion promoted during pre-exposure.Pre-exposure, however, resulted in ~20% lower strain-to-failure and a few small load drops close to final failure along with slightly reduced fracture stresses, Fig. 1 and Table 1.Without pre-exposure to 0.6 M NaCl the initial stress-strain response for as-received AA5083-H131 during SSRT in laboratory air replicates dry air behaviour, irrespective of sensitization time at 80 °C.However, following pre-exposure to 0.6 M NaCl and the introduction of local stress raisers created through IGC, cracking initiates when the local stress intensity factor, K exceeds a critical minimum, KI-IGSCC.Stress intensity factors for crack initiation have been estimated by inputting the depth of the IGC, that essentially produce sharp starter cracks around the circumference of SSRT samples, and loads from the SSRT data into available K solutions .This cracking produces a divergence from the prototype stress-strain response via an initial gradual load drop, the attainment of UTS that is then followed by a series of sudden load-drops, some of which are major, as seen in Test #’s 4, 7 and 13 in Figs. 1 and 1."The global tensile strains required for KI-IGSCC to be exceeded decrease with pre-exposure to 0.6 M NaCl and/or alloy sensitization, Table 1 and Fig. 
1, due to the increased depth of IGC and resulting higher local K's.The inability to promote Type-1 crack initiation in smooth AA5083-H131 tensile test specimens in the absence of suitable stress raisers has been confirmed by concurrent work for applied nominal strain rates down to below 10−7 /s and over a wider range of alloy sensitization levels.Comparison of fracture surfaces generated in dry and laboratory air indicate the depth of IGC formed during pre-expose to 0.6 M NaCl increased with alloy sensitization, increasing from around 70 µm for as-received AA5083-H131 pre-exposed for 375 h, through to ~110 µm for –H131 material sensitized 250 h at 80 °C and then pre-exposed for 250 h, Table 1."As indicated above, such changes in IGC depth will increase the local imposed K's, which have been roughly estimated using conventional solutions, , accepting that the IGC depths are below those known to provide accurate K's.SEM evaluation of SSRT samples fractured in laboratory air reveals that the initial EIC growth, which consistently initiates from IGC sites, occurs via the classic mode of intergranular stress corrosion cracking well known to be promoted in sensitized Al-Mg based alloys , herein referred to as Type-1 cracking, Fig. 2.For Type-1 cracking, despite grain boundaries being visible, the smooth fracture surfaces display little topography between the grains.Type-1 cracking never initiated during SSRT conducted in dry air, irrespective of alloy sensitization or the IGC developed during pre-exposure.The IGC depth generated in laboratory air after pre-exposure to 0.6 M NaCl increased with alloy sensitization and pre-exposure time, Table 1.Fracture surfaces were examined in more detail, Fig. 2, following the recognition that pre-exposure to a saline solution prior to SSRT in laboratory air consistently led to sudden and often substantial load-drops after the UTS, typically immediately followed by short periods of re-loading prior to the next load-drop and resulting in low final fracture stresses.The present work has also revealed that during SSRT, Type-1 cracking is always superseded by another form of EIC, which we do not think has been identified previously, herein referred to as Type-2 cracking.Type-2 cracking is associated with these sudden load drops before being superseded by Type 3 fast-fracture promoted during final over-load failure by MVC, typically manifested as a shear failure at an angle of approximately 45° to the otherwise flat plane of the fracture surface, Fig. 2c).The areas of final failure may be restricted to less than 10% of the test specimens cross-sectional area, consistent with recorded low fracture stresses, provided in Table 1 and depicted by ‘X′ symbols shown in Fig. 1.Type-2 crack fracture surfaces can at a cursory glance appear to resemble those for Type-1 cracks, with both displaying large planar areas and demarcation along grain boundaries.However, closer inspection of the Type-2 fracture surface reveals that the large nominally flat planar areas are always outlined by a sharp ridge-step of ductile shear, and the flat planar regions include dimples somewhat resembling ductile fracture other than the dimples are much larger, non-circular and have shallow flat bases, Fig. 2, and.The bases of these flat ‘dimples’ have a brittle cleavage-like appearance, as shown in Figs. 
2 and 2, are believed to be associated with the fracture of coarse intermetallic particles with the nominal composition Al6, along with some other minor additions , and Mg2Si, the expected secondary phases present in AA5083.High-resolution X-ray Computed Tomography of Interrupted SSRT has been conducted on sensitized AA5083-H131 tensile samples pre-exposed to 0.6 M NaCl subjected to SSRT in laboratory air, where the plastic straining was terminated immediately after one or after several significant ‘load-drops’ after the UTS,).Key images are provided in Figs. 3 and 4.The X-ray CT data not only allows investigation of the fracture surface topography in great detail, it also has been used to augment information from the SEM characterization of fracture surfaces to overcome issues that potentially arise when only using the high depth of field SEM images, as these can cause fracture surfaces to appear misleadingly flat and thereby allow key information on fracture surface topography to be overlooked, Fig. 4.The tomography data has also been used to non-destructively explore sub-surface features via ‘virtual cross sections’.The difference in electron density difference between the aluminium matrix and the second phase particles means that they are easily distinguished.The X-ray CT analysis has also enabled the indisputable demonstration that the numerous regions of ‘isolated damage’ associated with Type-2 cracking, which are absent for Type-1 cracking), are ‘fully isolated’ regions completely unconnected to an existing crack by a ‘hidden’ 3D path.Comparing the SEM images shown in Figs. 2 and 2 of Sample #7 taken to failure with a corresponding surface rendering of the reconstructed X-ray CT data for an interrupted SSRT, Fig. 3 depict similarities for Type-1 and -2 crack growth regions.However, the views shown in Fig. 
4 for another interrupted SSRT, and the visualization of the virtual cross sections, highlight significant topography and height differences between all the multiple planes and plateaus in the Type-2 cracking regions of the fracture surface. The tomography results provide important information in addition to the SEM fractography on these planar regions, highlighting the significant differences between Type-1 and Type-2 cracking. It is clear that while Type-2 cracking is highly topographical, it remains delineated by second phase particle clusters which appear to be associated with grain boundaries. The ability to explore the virtual fracture surfaces of the interrupted SSRT in 3D reveals the extreme topography and the limited planar continuity of the fracture surface. This provides critical information regarding how the crack must have propagated to create the crack morphology observed in the fractured samples examined in the SEM. In addition, the 3D tomography also provides evidence regarding the driving force required to promote crack divergence during growth at any particular point during this stage of cracking. Here X-ray CT reveals the rather unusual characteristic of regions of shear failure that link these various flat regions as the crack propagates through the material. Closer inspection reveals that almost all of the flat regions are completely surrounded by the regions of shear and as such sit isolated above or below the surrounding fracture surface. Local stress intensity factors generated at IGC sites have been estimated by inputting maximum IGC depths generated during pre-exposure and maximum stresses generated during SSRT into empirical relationships provided in the literature for elliptical cracks with the appropriate c/a ratios. Estimated plane-strain stress intensity factors of 3.8–4.2 MNm−3/2 for Type-1 crack initiation for test samples #4 and #7, Table 1, are consistent with experimentally observed threshold stress intensity factors, KI-SCC, for 'classic' intergranular stress corrosion cracking in sensitized Al-Mg based alloys. Similar calculations for the Type-1-to-Type-2 crack transition, made by inputting maximum Type-1 crack depths and shape parameters, yielded K values of 12–15 MNm−3/2, Table 1. Type-1 cracking corresponds to 'classic' IGSCC as reported elsewhere in the literature. It is characterized by very smooth and flat fracture surfaces, as depicted in Figs. 2 and 3. By visual inspection and/or optical analysis these regions appear bright and shiny. This fracture mode is constrained to grain boundaries and deviates very little from the most direct path through the material. The schematic of Type-1 cracking in Fig. 5 shows that the fracture path outlines grains and sometimes exhibits a modest step height difference. As a consequence of the clusters of second phase particles typically residing on grain boundaries, the propagating Type-1 crack will pass through some such regions and they will cover small areas of the Type-1 fracture surface in Fig. 2. The mechanisms responsible for Type-1 cracking are still very much under debate and are not considered here. Type-2 cracking is characterized by the fracture surface shown in Fig. 2 and shown diagrammatically in Fig.
5. The appearance of the fracture surface has some characteristic features which in some ways make it similar to both a ductile dimpled fracture surface and a Type-1 fracture surface. The size of the isolated 'patches' is similar to that of the grain size as delineated on the Type-1 fracture surface. Comparing the real and virtual fracture surfaces in Fig. 3, it is more difficult to determine the Type-2 regions from the X-ray CT data when viewed top down, as the finer features of the fracture surface are not resolved in the X-ray CT data. The similarity of Type-2 fracture surfaces to ductile dimpling is only present when the scale of the images is not taken into account. The Type-2 fracture surfaces are dimpled with large, irregularly shaped, flat-bottomed dimples, often over 10 µm across and displaying a large size distribution, that are terminated by particles easily resolvable in the SEM and that differ significantly from the typically circular and deep dimples associated with the MVC of ductile fracture. Two types of predominant particles have been found to reside at the base of these Type-2 cracking flat-bottomed dimples: Mg2Si particles, which appear dark in the BSE images in Fig. 5, and Al6 particles, which appear bright in the BSE images in Fig. 5. It is worth noting that both of these particle types are difficult to observe when imaging in SE mode in the SEM due to lack of contrast. It is difficult to appreciate from the SEM images, but the other significant characteristic of Type-2 cracking is the major topography of the fracture surface. This can be best appreciated from a virtual cross section of the fracture surface from the X-ray CT data. The 3D virtual fracture surface shown in Fig. 4 illustrates that the entire Type-2 fracture surface is composed of relatively flat isolated patches of large irregular dimples at very different heights, with height differences that can easily exceed 100 µm in these samples. The process that we propose for the propagation of Type-2 fracture consists of several stages. First, it is dependent on a microstructure containing second phase particles, which in AA5083 are predominantly Mg2Si and Al6. The specific composition and the nature of the interfaces are important and are discussed later. This microstructure is charged with hydrogen, which can come from a pre-exposure step and/or the test environment. The internal hydrogen is focussed at the interfaces of the second phase particles but will also reside in multiple different locations including grain boundaries, dislocations, etc. We propose that damage is focussed locally when these 'charged interfaces' are subjected to an applied stress during SSRT; after a sufficient stress is applied, these 'charged and loaded interfaces' plastically strain and locally fracture, and sometimes the second phase particles themselves break apart, although this may have occurred during an alloy manufacturing processing step. Focussing of the damage at the second phase particles is due to a combination of the presence of hydrogen and local strain. This 'isolated damage initiation' is an additive effect of the local hydrogen and the strain and the fact that both are focussed at the second phase particle interfaces. These 'isolated damage' regions grow internally. 'Isolated damage' regions that are of a larger size and in a favourable position, closer to and situated directly ahead of the propagating crack, are then connected to the propagating crack front through overload and MVC of the intervening metal ligaments. This can also be easily observed from the SE SEM of the fracture
surface, but only when the sample is tilted such that the near-vertical regions of MVC which link the 'isolated damage' regions can be observed. We observe that these regions of 'isolated damage' only occur in close proximity to the Type-2 cracking regions and are not observed in the material adjacent to regions of Type-1 cracking. This leads us to conclude that these 'isolated damage' regions are associated explicitly with a propagating crack front and, furthermore, a crack front propagating in a water-containing environment, as discussed later. Type-2 cracking is not restricted to a single plane of interconnecting grain boundaries like Type-1 cracking. Instead the crack propagates forward on planes and plateaus of limited extent in small bursts. This sequence of patchwork events eventually creates a continuous fracture path through the material. The load-drops, like those shown in Fig. 1, are followed by a reloading, which results from the crack blunting on passing through a cluster or series of clusters of second phase particles and eventually encountering the remaining uncracked ductile metal ligaments. These isolated 'patches' are somewhat dispersed on any given single plane; however, when 'patches' on nearby planes are included, their projection onto a single 2D plane creates a continuous fabric of patches. Note the difference between the distribution of second phase clusters in a single plane compared with the number of overlapping clusters when nearby planes are considered, as seen in the 3D view in Fig. 7. Considering a single short transverse plane through the material, it is clear that the second phase particles do not form continuous coverage and a fracture path propagating along this plane would encounter the ductile matrix after only propagating a short distance, i.e. through one cluster. A side view of the arrangement of the second phase particles also shows that the opportunity to move out of plane offers limited options. However, when the entire 3D environment ahead of the propagating crack front is considered there are many nearby clusters, especially considering the length of the crack front. As is clear from the fracture surfaces, Type-2 fracture is not restricted to the same plane across its breadth or length. Instead, the crack finds a 3D path through the microstructure from cluster to cluster. Interestingly, the associated dimensions for plane-strain process-zone sizes associated with K values of 4 and 12–14 MNm−3/2 are ~40 and 365–450 µm, respectively. This is consistent with the anticipated minimum volumes of material requiring plastic strain to enable Type-1 and Type-2 crack propagation, with the latter encompassing several parallel planes on either side of a main crack as shown in Fig. 8. Type-3 cracking refers to the final overload failure and comprises shear failure via micro-void coalescence. The fracture surface is characterized by small circular dimples in Fig. 5. Particles associated with these dimples are hard to observe as the micro-voids are typically <1 µm across. In the pre-exposed and sensitized samples studied here, small patches of isolated Type-2 cracking were observed in Fig. 5. The MVC failure regions are the only parts of the fracture surface that deviate from being essentially perpendicular to the applied load, instead being inclined at an angle of ~45° to it. Recent findings based on a non-linear finite element approach by Bouiadjra et al.
indicate that the introduction of an isolated micro-crack in the process-zone ahead of a main crack can significantly modify the shape of the resultant plastically strained region, extending it parallel to the main crack while reducing it in the normal direction. In the present work, this behaviour will shift Type-1-to-Type-2 crack transitions during SSRT to deeper cracks. Pre-exposure to a saline solution ahead of straining in laboratory air is proposed to provide local microstructural sites associated with weakened regions for some grain boundary second phase particles, which subsequently upon straining may become 'isolated damage' regions. The observation that increasing sensitization exacerbates this effect is indicative of a role for the Mg-rich phases generated in grain boundaries in association with Al6 particles during sensitization. The high local concentration of Mg resulting from sensitization raises the possibility that Type-2 cracking may involve the local formation of an intermediate phase, such as magnesium hydride, although this speculation awaits verification through further detailed analyses. We propose that the periods of reloading that interrupt the sequence of load drops in Fig. 1 are related to crack-tip sharpening that is facilitated by testing in 'humid air', and that this is critical in order to sustain the propagation of Type-2 cracking. It is also proposed that, during the periods of rising load between load drops, sharpening of the crack front takes place based on the availability of local sources of hydrogen. In contrast, the inability to sustain significant amounts of Type-2 cracking in 'dry' air is considered to be associated with the inability of crack fronts to sufficiently re-sharpen after a sudden load drop, due to the lack of sufficient levels of local hydrogen. This explains why only very occasional isolated patches of Type-2 cracking are generated by straining in dry air, as such limited patches of Type-2 cracking are likely to be promoted by local microstructural stress raisers. In dry air there is no mechanism to sustain Type-2 cracking: local microstructural features can act as local stress raisers and propagate the crack a short distance, but it quickly arrests after crack blunting, with no mechanism to continue propagation. The role of the moisture in the environment is to re-sharpen the crack front and create the local stress state required to continue Type-2 propagation. As observed presently, significant load drops and a continuous extent of Type-2 cracking are only seen in tests where water is present in the environment. It is important to highlight the fact that second phase particles do not offer a favourable failure route during Type-1 and Type-3 failure and are not considered to impose a limit on the mechanical or EIC resistance of this class of alloys. However, the combination of the embrittling effects of hydrogen and large local K values exploits these sites as preferential failure paths in Type-2 failure. The enhancement of this effect with increasing levels of sensitization also suggests the importance of grain boundary Mg-rich phases in this process, the role of which has not been confirmed but is likely to involve generating hydrogen at the crack tip, re-sharpening of the crack tip, interaction with hydrogen to weaken grain boundaries and interfaces, etc.
Based on the detailed tomography and SEM conducted, we propose that Type-2 cracking requires: (i) sufficiently large K values to generate a process zone of sufficient size ahead of the crack front to encompass several adjacent parallel planes of grain boundaries; (ii) hydrogen concentrated near second phase particles, with Mg2Si and Mg-rich phase precipitation, both alone and in association with these other particles, leading to embrittlement of the interfaces between the particles and the matrix and, following sufficient straining, de-cohesion and the growth of shallow flat-bottomed dimples; and (iii) a local source of moisture for crack-tip 'sharpening' to enable the multiple load-drops shown in Fig. 1. Having identified Type-2 cracking in our X-ray CT studies, and with the benefit of hindsight, it is likely that Type-2 cracking was also present in some of our previous work, as well as in the work of others. Several factors have contributed to Type-2 environment-induced cracking remaining unrecognized until now, the most significant being the limited situations where the specific conditions needed for significant Type-2 crack growth are satisfied. Although over the years several researchers have observed single or multiple sudden load-drops just prior to the final failure of AA5083 tensile specimens subjected to SSRT in saline environments, none have discussed these features. In these studies it is not surprising that the researchers failed to identify the new mode of environment-induced cracking reported here, as it would have only existed over a narrow transition zone between an outer ring region of classic IGSCC and a central region of fast-fracture formed during the test specimens' final failure by over-load. Implications of this include the inability to reconcile the final fracture stress with the percentage of the final fracture attributed to MVC shear failure. In addition, while failure analysis of in-service structures has predominantly relied on SEM, since the fracture surfaces offer the best route to understanding failure, the inability to correctly identify these different regions could lead to inaccurate conclusions on failure. Mechanistically, the role of hydrogen at the crack tip and within the process zone awaits a quantitative explanation. We believe that Type-2 cracking relies on a sufficient amount of hydrogen within the process zone and favour what might be called a strain-generated hydrogen embrittlement process. The visualization of confirmed isolated damage ahead of the crack tip, which is enhanced through an increased degree of sensitization and pre-exposure, adds further support to hydrogen as a prerequisite for this mechanism of EIC.
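To make the magnitude of these estimates easier to follow, the short Python sketch below reproduces the kind of arithmetic described above for the local stress intensity at an IGC site and the associated process-zone size. It is an illustration only: the simple shallow surface-flaw shape factor, the Irwin-type process-zone expression rp = (1/2π)(K/σy)², and the nominal maximum stress (~380 MPa) and yield strength (~250 MPa) are assumptions chosen for illustration, not the exact empirical solutions or material data used in this work.

```python
import math

def surface_crack_K(stress_mpa, depth_m, Y=1.12 * 2 / math.pi):
    """Rough stress intensity factor (MNm-3/2, i.e. MPa*sqrt(m)) for a shallow
    semi-elliptical surface flaw of depth `depth_m` under remote tension.
    Y ~ 0.71 corresponds to a roughly semicircular (c/a ~ 1) flaw; this is an
    illustrative stand-in for the empirical solutions cited in the text."""
    return Y * stress_mpa * math.sqrt(math.pi * depth_m)

def process_zone_size(K_mpa_sqrt_m, yield_mpa):
    """Irwin-type process-zone estimate r_p = (1/2*pi)*(K/sigma_y)^2 in metres.
    With sigma_y ~ 250 MPa this roughly reproduces the ~40 um and several-hundred-um
    figures quoted in the text; the authors' exact formulation may differ."""
    return (1.0 / (2.0 * math.pi)) * (K_mpa_sqrt_m / yield_mpa) ** 2

if __name__ == "__main__":
    # Assumed inputs: IGC depths of ~70-110 um and a nominal maximum SSRT stress of ~380 MPa.
    for depth_um in (70, 110):
        K = surface_crack_K(stress_mpa=380, depth_m=depth_um * 1e-6)
        print(f"IGC depth {depth_um:>3} um -> K ~ {K:.1f} MNm-3/2")

    # Process-zone sizes for the K levels discussed in the text.
    for K in (4.0, 12.0, 14.0):
        rp_um = process_zone_size(K, yield_mpa=250) * 1e6
        print(f"K = {K:>4.1f} MNm-3/2 -> process zone ~ {rp_um:.0f} um")
```

With these assumed inputs the sketch returns K of roughly 4 MNm−3/2 for a 70 µm deep flaw and process-zone sizes of roughly 40 µm at K = 4 MNm−3/2 and several hundred µm at K = 12–14 MNm−3/2, i.e. the same order as the values quoted above.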
Two mechanistically different modes of EIC have been identified using high-resolution X-ray computed tomography and scanning electron microscopy (SEM) in sensitized AA5083-H131 that had been pre-exposed to 0.6 M NaCl prior to interrupted slow strain rate testing (SSRT) in the short transverse direction while exposed to laboratory air (50% RH). One mode, shown to propagate when local stress intensity factors are in the range of 4–12 MNm−3/2, is the well-known 'classic' form of intergranular stress corrosion cracking, Type-1 cracking, which would not initiate and propagate in dry air, irrespective of pre-exposure or sensitization. The second mode of cracking, identified presently as Type-2 cracking, is associated with sudden load-drops occurring after the UTS during SSRT. Type-2 cracking propagates at higher local stress intensity factors (above 12–15 MNm−3/2) with significantly higher average growth rates than Type-1 cracking, and involves the sudden simultaneous mechanical linkage of multiple fully-isolated regions of damage, pre-determined during pre-exposure to NaCl solution and generated during straining in laboratory air. Differences in the extent of Type-2 cracking for pre-exposed samples tested in 'dry' air compared to laboratory air (50% RH) were marked, with that in dry air being limited to isolated patches. High-resolution 3D tomography and detailed SEM have been used to distinguish these two mechanistically different modes of EIC. Implications for a role of hydrogen embrittlement during EIC are discussed.
490
Collating and validating indigenous and local knowledge to apply multiple knowledge systems to an environmental challenge: A case-study of pollinators in India
There is an important role for indigenous and local knowledge in a Multiple Evidence Base to inform decisions about the use of biodiversity and its management.The Convention of Biological Diversity refers to the knowledge of indigenous and local communities and more recently the Nagoya Protocol notes ‘the importance of traditional knowledge for the conservation of biological diversity and the sustainable use of its components, and for the sustainable livelihoods of these communities’.Policy makers increasingly seek to ensure that policy regulating environmental management is evidence based and also recognize that the evidence may arise from parallel knowledge systems.While there are materials available for collating Indigenous and Local Knowledge and practices for specific challenges, methods for integrating indigenous or local knowledge with the scientific evidence remain debated.There are instances where local knowledge has been successfully gathered and incorporated into decision making with the agreement of the local community but there is also concern about the validity and utility of local knowledge.As a counter argument it has been pointed out that the process of validating indigenous or local knowledge with western scientific knowledge might be superfluous or misunderstands the epistemology of indigenous knowledge systems, and that poor tools may serve to alienate people further from participation.Although epistemological approaches in parallel knowledge systems may differ there is a need for a transparent tool to verify and validate evidence, one that does not alienate participants but which allows those co-creating policy to be confident that, within its own cultural framework, the knowledge is both valid and agreed.Sutherland et al. outline a 3-stage process for collating and integrating parallel knowledge systems to support integrated analysis for decision-making.The first of these stages is to recognize that there are fundamentally different types of knowledge, each associated with different needs for different stakeholder groups.The second stage is to collate and validate indigenous and local knowledge and the third stage is to partly combine it with available information from conventional scientific knowledge, using formal consensus methods such as the Delphi technique.We developed stage two of this methodology and applied it to a case where indigenous and local knowledge could contribute substantially and may indeed be the principle component of the available knowledge base.Our aim was to collate and validate local knowledge in preparation for integration with scientific knowledge, for the purpose of producing a Multiple Evidence Base to develop conservation strategies for pollinators.There is a growing acknowledgement that pollinator decline is a global phenomenon and evidence that declining pollinator diversity and abundance can affect food security although uncertainty remains over the extent of the impact.This concern extends to India where little is known about pollinator population trends and there are no published empirical data explicitly linking a change in crop yields to pollinator abundance.This is worth underlining as it has been suggested that decisions, even at national policy level, have been made on the basis of scant evidence.In India there are no validated scientific studies to elucidate recent trends in pollinator diversity or abundance.This presents researchers with a conundrum – how to determine whether change has already taken place in order to 
determine the direction of trends in pollinator abundance/diversity and to establish whether they are linked to changes in crop yield.Through a recently completed project an important group of stakeholders were identified as smallholder subsistence farmers, including tribal people, who have personal and procedural knowledge of crop production.These subsistence farmers meet a large part of their nutritional needs through a variety of pollinator dependent vegetable crops.The project included a participatory scheme, where local communities were engaged in pollinator monitoring efforts, thereby developing citizen science and incorporating valuable capacity building components, as exemplified by Community Based Monitoring and Information Systems.During the project the partners and stakeholders came to a consensual understanding of critical goals that addressed overlapping concerns.The farmers expressed a need to be aware of potential negative drivers of vegetable yields and a desire for a suite of practicable interventions to protect or increase those yields.Scientists hypothesised that pollinator populations are declining and that this may be an important driver of changes in vegetable yields.Pollinator-friendly management practices may help to increase yields but the base-line information to develop this is missing.The exercise was designed to address the shared aims of the stakeholders.At a larger-scale, this information will also contribute to a) our understanding of whether there could be a ‘pollinator crisis’ in India, as found in other countries; b) the global evidence-base on the status of pollinators.Two clear knowledge gaps emerged from dialogue: 1) there was a lack of information on the diversity of crops that were grown and the trends in productivity,in the study areas; 2) there was also a lack of information of pollinator identity and trends in abundance and diversity.To further understand whether there is a ‘pollinator crisis’ in India, it is important to know which pollinators are important for crop pollination and whether any changes in crop productivity are linked to changes in pollinator diversity or abundance.This paper focuses on collating traditional and local knowledge that can be validated in a meaningful and respectful way.Validity is interpreted as the extent to which observations reflect the phenomena or variables we are interested in.The process of validation involves verification and evaluation.Here we present a novel method using consensual validation by peer groups of local knowledge holders, whereby knowledge is validated within its own cultural framework and carried out by individuals with the same mental model.We suggest this is loosely analogous to the peer review process carried out by scientists to validate scientific data, thus standardising the quality of validation between farmers and scientists.It is in contrast to other methods where the traditional or local knowledge is presented as an environmental report and validated in technical reviews or directly validated against scientific data.The aim of the knowledge gathering exercise was to establish whether farmer participants considered that the yields of pollinator dependent crops have changed in the last 10–20 years, whether pollinator abundance and diversity has changed over the same period via factual observations and then give their assessment of whether these phenomena are linked.A secondary aim was to identify possible mechanisms for any observed changes and potential interventions to conserve or 
restore crop yields and/or pollinator populations by asking farmers to make inferences based on their knowledge.We differentiate between factual observations and inferences; inferences are inferred mechanisms, causal links or theories, as distinct from factual observations.These can lead to hypotheses testable using experimental scientific approaches.The study sites were located in the East Indian state of Orissa and the study carried out in February 2014.The study sites were classified into three types representing different levels of farming intensity based on chemical inputs, vegetation cover, land cover and cropping intensity as described in: 1) an area of high intensification with large crop fields, low natural vegetation cover and relatively high chemical inputs; 2) an area of low intensification with small fields, high cover of natural vegetation and relatively low chemical inputs and 3) an area of intermediate intensification according to the same criteria.80 farmers operating on the boundary of the Darwin Initiative project were invited to participate by local ‘rural advisors’ who knew them well; the 50 farmers who took part had not received training in pollinator identification and were not directly associated with project activities.Discussion took place within each study area in three randomly assembled groups of between 5 and 7 individuals.The groups were interviewed concurrently, each working with a different researcher in a separate break out space.In total, nine groups of farmers engaged in the exercise.At the session start, the purpose, process and expected output were explained to participants, who verbally gave consent.Participation was voluntary.Conversations were structured around the questions shown in Table 1 and took place in the local language and dialect.Facilitators encouraged participants to expand on the questions and allowed additional discussion.Detailed notes of the discussions were scribed.Before discussing trends in pollinator abundance or diversity it was important to confirm that farmers could identify insect pollinators and had a common understanding of pollinator identity; this was confirmed by using a pictorial guide and quiz,.Farmers were asked ‘what is this?’,and ‘can you name it?’,Positive recognition was recorded if the name was provided by more than one farmer.When each farming group identified an insect we asked “Which crops do you see these insects visiting?,.This information was discussed and validated within study areas.Discussion lasted for between one and two hours, after which a number of statements were derived by the researchers; example statements are shown in Table 2."The participants regrouped and were asked to review each group's statements.All farmers had the opportunity to review the statements generated by other groups within their own study area.Statements were read out and farmers had a brief discussion among themselves following which they were asked to either accept, reject or modify the statement — thereby providing internal validation and consensus for the statements made.In some cases there was a discussion about a particular point but in all cases agreement was reached through discussion.One set of agreed statements was produced from each of the three groups within each of the three zones.Farmers were not asked to verify statements from the six groups from the other two study areas.Differences in the number of crops grown in the study areas were tested using ANOVA.Other responses were collated and are here represented 
graphically.To interpret the answers to question 8 ‘which crops do you see these pollinators visiting?’,we constructed a network showing the linkages between the crop plants and the pollinators based on the anecdotal evidence from farmers.Mutualistic networks are frequently used to represent interactions between mutually-benefited taxa, however, the network we constructed represented plant pollinator interactions based on local understanding.The information from the three study areas was pooled to form a single network describing plant-pollinator interactions based on farmer perceptions.In our network the interaction strength indicates the number of farmer groups that cited an interaction.We assumed that the more farmers that cited an interaction, the more confidence we could have that this interaction exists, therefore the line width can be seen to represent a proxy for confidence in the information.However, we acknowledge that this information is essentially biased because the chances of a farmer observing an interaction is influenced by detection probability based on size, insect rarity and visitation frequency.Therefore our network only provides information on positive interactions and it is not possible to draw inferences about lack of connections.A further bias is that farmers may misidentify closely related bee species; to minimise this effect we pooled data at genus level for all species except the two Apis species Apis dorsata and Apis cerana as these were readily distinguished by participants.Network analysis was performed by using “R” statistical software version 3.0.1 with “bipartite” and “SNA” used to construct the network and “ggplot2” and “igraph” and packages used to visualise data.Farmers reported that collectively they grew 41 crops.The number of crops grown did not vary between study areas.Each of the nine groups reported that they grew between 17 and 29 crops between them.Three crops were grown ubiquitously: brinjal, ladies finger and ridge gourd.Other commonly grown crops included Curcubits, legumes as well as chilli, maize, mustard, radish, rice and onion.Three crops were only grown by farmers in one group.Only three of the crops mentioned by farmers are known not to be dependent on pollination services at all, these were banana, maize and rice.Although there was general consensus between groups on the direction of change within each zone, trends of change in crop yield differed between zones.In the extensive farming areas some crop yields were reported to have increased significantly, particularly ridge gourd, tomato, brinjal and rice, although others were reported to have declined including broad bean, field bean, maize, mung, pulse and sweet potato.In the intermediate and intensive farming areas farmers consistently reported a trend towards lower yields.Farmers in the intensive zone only assigned values to two crops, rice and brinjal but agreed a broad statement that all other crop yields had fallen.Farmers in the intermediate areas were more detailed, agreeing that brinjal, chilli, ladies finger, pumpkin and cowpea had all declined.The farmers drew their information from notes on yields that they kept in farm diaries which is a common practice.Where crop yield had increased farmers cited new products, new training and solutions to water management as important factors.The declines in crop yield were attributed to overuse of pesticides, declining soil fertility, increased pest damage, climate change, pollinator loss, increased use of fertilizer and lack of crop 
specific expertise.The large increases in yield in ridge gourd and tomato were attributed to improved plant quality and hybrid seeds respectively.The most frequently recognised pollinators were Amegilla spp., Apis dorsata and an example of a potentially pollinating butterfly represented by the peacock pansy.Apis cerana, one species of Ceratina sp. and Xylocopa sp. were recognised by seven of the nine groups as was the lime butterfly, the larvae of which are a serious pest.The least well-recognised insect was the non-native Apis mellifera.Farmers were then asked if they thought that the insects played any role in crop production.Apis dorsata, Apis cerana and Xylocopa spp. were identified as pollinators by all groups that recognised them.However Amegilla spp., Ceratina spp. 2, Apis florea, Megachilidae and hummingbird hawk moths were inaccurately identified as pests by at least one of the groups.When asked how they gained their identification skills and knowledge of insect behaviour farmers answered that it was gained from personal observation, knowledge passed down by elders and text books.The anecdotal visitation network is relatively simple, showing five bee taxa visiting 17 crops.Apis dorsata was the best connected species, visiting 15 crops, Lasioglossum spp.were the least well connected, reported as visiting only bitter gourd and pumpkin.Brinjal was known to be visited by most of the bees and thus best connected.Spiny gourd was least connected and only visited by Apis dorsata.Pollination was understood to be the process of moving pollen from male to female flowers or ‘pollen exchange’.All famers agreed that all crops need pollination; one group emphasised that, even if the male and female flowers were on the same plant, pollination was still necessary.No additional species of pollinating insects were voluntarily suggested by the participants.Eight groups of farmers suggested that pollination was necessary to increase yield while two suggested pollination was important for quality.One group underlined the importance of pollination for brinjal yield and another group answered that crops need pollinators because ‘God’ has designed it so.None of the validating farmers disagreed with this last statement.Farmers gained their understanding of the process of pollination by observing the increased yield or fruit quality after pollination or by observing the relationship between visitation and yield, from ‘books’, formal training and parents.The crops identified as needing pollination most were: brinjal, pointed gourd, ridge gourd, spiny gourd, ladies finger, cucumber, mustard, sunflower, and pumpkin.Pollinators were ranked in importance differently in each system."Farmers based this assessment on their observation of the number of bees seen flying, observation of the behaviour of bees they saw, general ‘observation’; one group assigned the insect's importance as a pollinator according to the amount of honey produced.Farmers were then asked whether the abundance of pollinator populations had changed in recent years.Apis cerana in particular was identified as having declined dramatically in all three farming systems.The only increase was in Apis dorsata in the extensive zone, a trend that was independently suggested by two groups and was validated by all farmers from that zone.Farmers were asked “what do you think has caused these changes in pollinator abundance?,.Eleven drivers were suggested, the most frequently cited being pesticide use, although this was qualified in the intensively farmed 
zone where farmers suggested that it was the number of different pesticides that were used that caused problems rather than quantity in itself.In the extensive area farmers suggested that the social bees were able to recover and would be seen in the fields a few days after pesticide application.When asked how they came to these conclusions farmers cited observations that included dead and dying bees following pesticide application and general observation of patterns of bee activity and abundance.In the extensive area farmers agreed that weather had changed in the last few years, with cyclones increasingly taking place in the flowering season which reduced bee food supply and killed crop plants.When asked if it would be useful to have more pollinators all farmers responded ‘yes’.The farmers were then asked to suggest ways to increase pollinators; the interventions they suggested focused upon pesticide reduction, ranging from ‘go organic’ to ‘use selective pest control’ and use ‘insect predators rather than pesticides’.Other interventions were focussed on habitat manipulation and also on importing pollinators by using bee boxes.The questions then focused on the use of pesticides and the farmers were asked whether pesticides affected pollinators.All farmers responded in the affirmative.They were then asked how pesticides affect pollinators and were then asked to say how they acquired that knowledge.Farmers were clear that pesticides either killed pollinators directly or indirectly by disrupting their physiology; however, some more subtle variations emerged.Apis dorsata and Apis cerana were reported to recover after pesticide applications and, of these two, A. dorsata was considered the most resilient.Farmers gained knowledge by observing bee death and reported witnessing them flying away from pesticide sprays.The aim of developing this method was to enable researchers to collate knowledge from indigenous and local knowledge systems to determine evidence-based management strategies in which local communities have their knowledge included, and in which they consequently have a voice.To test this method, we selected a case-study, that was 1) an example of a system about which little information had been collected by scientists and for which local knowledge was likely to provide the principle evidence; 2) an urgent issue of both local and global concern.Underpinning this approach is the assumption that the process collects factual information.This is distinct from work which attempts to represent whole knowledge systems.It is widely acknowledged that it is important that the evidence collated is validated by peers, as each knowledge system requires appropriate validation that is aligned with its own values and for this we opted to validate the acquired information from within the community using a consensual ‘peer-review’ approach.In the evidence gathering process we used metrics that could be construed as being rooted in a western scientific framework, such as the use of percentages, and linear cause and effect relationships.However, the aim was to arrive at a set of potentially different but ultimately comparable conclusions from a range of stakeholders and we were keen to capture evidence so that it can be used to facilitate the harmonisation of knowledge further upstream in the management/policy development process.Although no difficulties were encountered when asking about proportions or abundance in our case study, we acknowledge that there is a strong need to co-develop metrics between 
stakeholders and that this would increase the potential for equitable representation of stakeholder knowledge in management and policy development. Our aim was to gather evidence that would not only stand independently but also be ready for integration with evidence from other knowledge systems. There has been a debate over participatory approaches to research, and authors raise concerns that even participatory methods can be exclusive and only give the illusion of participation. However, others argue that this can be avoided by careful construction of the participatory process. This should be approached by developing dialogues 'in a problem-solving process involving negotiation, deliberation, knowledge generation'. It is within this kind of framework that the method we suggest would work optimally. The data collated provides the only information available on local crop yields and an indication of pollinator trends in Orissa. As there is no scientific information available on these issues from this region, it is a useful starting point for participatory research with farmers. The overall message supported current understanding from global analyses that pollinators are declining, and suggests that there have been substantial declines in the last 10–25 years for some bee species which farmers reported as visitors of pollinator-dependent crop flowers. This is a warning sign that there is an issue which needs urgent attention. Specifically, the evidence collated addressed the knowledge gaps identified in the participatory process adopted by the Darwin Initiative project. Declining yields were observed by farmers for insect-pollinated staple crops which were important in local diets, such as beans, pulses, brinjal and ladies finger. These crops are known to provide important micronutrients. The extent to which pollinator limitation in these crops would actually deprive people of important nutrients from their diets would depend on exactly what they eat, and would require empirical analysis, as conducted by. It was suggested by the participants that cucurbits, brinjal, ladies finger, mustard and sunflower were all dependent on pollination to maximise yield, something that is reflected in the literature. It is known that brinjal, mustard, ladies finger and sunflower all have a modest requirement for insect pollination and yields will increase by 10 to < 40% with insect pollination. Cucumber is more dependent: lack of insect pollinators can reduce yields 40 to < 90%; for other cucurbits, insect pollination is essential, and production will be reduced by ≥ 90% without animal pollinators. Five pollinating taxa were identified as having changed in abundance in the last 10–25 years: Apis cerana, Apis dorsata, Apis florea, Amegilla spp. and Xylocopa spp. With the exception of Apis dorsata, which was reported to have increased in the extensively farmed areas, all bee species were estimated to have declined in abundance. Unlike the other species, Apis dorsata is a migratory species which may respond to wider landscape-level changes. In some cases, the decline in abundance was reported to be dramatic. We used a network to visualise the pollinator visitation information provided by farmers. The majority of visits were ascribed to Apis dorsata, Apis cerana and Xylocopa spp., followed by Amegilla spp., suggesting that the declines in these species could have a significant impact on food security. Ceratina spp. and Lasioglossum spp.
were also mentioned but connected with only two and three of the crops respectively. The crops that were identified as being visited by the greatest diversity of insects were brinjal, pumpkin, ladies finger and mustard. Although visitation does not confirm that pollination is taking place, it is suggestive of it, and the network provides a basis for further work. In other studies, concerns have been raised that non-experts are unable to provide good information due to poor ability to identify bees. In our study the larger bees were well recognised by the majority of the farmers. However, the smaller honey bee Apis florea and the solitary bee species belonging to the genus Lasioglossum and the family Megachilidae were less well recognised. In general it has been observed that smaller species are less likely to be noticed or identified even by relatively experienced people. Bees are not generally well recognised without training, which means that more confidence can be placed on information relating to common species. Farmers used inferred knowledge to provide informed opinions as to why crop yield had changed but did not link crop yield to pollinator visitation. Only one group suggested that a lack of insect pollinators was driving crop yield losses, despite all groups later showing enthusiasm for encouraging pollinators. Nevertheless, participants understood the process of pollination, and although farmers were less sure of the role of specific insects in crop production, the larger bees were all identified as pollinators by the groups that commented. The smaller solitary bees such as Lasioglossum and Megachilidae, along with hoverflies and hawkmoths, were less frequently recognised as pollinators. We suspect that this could represent a detection bias in the knowledge base. Furthermore, some of the smaller species such as Apis florea and even Amegilla spp.
were identified as crop pests by some of the participants.The effectiveness of the majority of pollinating species has been poorly studied in science and, in many studies, bees other than honeybees, bumblebees or carpenter bees are classed together despite having diverse life-histories and physiology.For example in, Lasioglossum species are included in a category called ‘small and stripy’.There has been much discussion in the scientific literature about the impact of pesticides on pollinators in both scientific literature and in the public domain.The farmers in all study zones considered that pesticides had a negative impact on pollinators, using observations from their fields to support this assertion.One group identified a specific pyrethroid as having a great impact on bees, also confirmed by scientific research.Drawing on their experience famers suggested possible interventions to conserve or restore pollinator populations which included reducing pesticide use, managing natural pest predators, conserving or restoring diverse natural habitats and introducing bee boxes.The approaches for interventions suggested by the farming community are echoed by the FAO.Our findings highlight the importance for maintaining diverse non-crop habitats in agricultural landscapes for improved pollinator health.Other authors also underscore the need for researchers, policy makers and farmers to collaborate to mainstream pollinator conservation and management.The suggestions made by the farming community in the present study are useful; not only are they practical and testable but show a willing engagement on behalf of the farming community and can be considered a good basis for participatory research, indicating scope for co-producing management guidelines for pollinators.The evidence collected suggests that pollinator populations are threatened in the study area, particularly in intensively farmed areas.This requires immediate attention from policy makers to consider the management of both farmland and adjacent natural habitats that may support pollinators.The farmers suggested that a reduction in the use of pesticides would be beneficial and therefore alternative strategies for pest control need to be developed and can be considered a priority for research.In summary, our paper tests a new approach to validate factual and inferential indigenous and local knowledge in the context of an environmental issue that is both local and global in nature.This process will increase the opportunities for communities to contribute to evidence based management strategies.In our case study, there is as yet little relevant scientific knowledge to integrate with the indigenous and local knowledge locally – there is a large scientific knowledge gap relating to trends in pollinator abundance in India and there is little information on key crop pollinators for vegetables.The validated indigenous and local knowledge represents the majority of what we know.This is likely to be the case for many environmental issues, when considered at specific locations and for human livelihood.A lack of long-term data on pollinator abundance and distribution typifies the situation in much of the world.In areas where data are poor, local knowledge will form the basis of the evidence for determining local conservation needs and strategies.Using this method we captured valuable data that can be used to inform strategies for both policies and management, despite there being little scientific data available, and initiated a shared platform for 
farmers and scientists to begin to work on a problem that affects us all.
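As an aside for readers who wish to reproduce the network summary, the visitation network described above was built in R with the 'bipartite' package; the short Python sketch below shows an equivalent way to represent the same kind of data as a weighted bipartite graph, with edge weight equal to the number of farmer groups citing an interaction, the proxy for confidence used in the analysis. The taxa, crops and counts in the sketch are illustrative placeholders, not the Orissa survey data.

```python
import networkx as nx

# Each tuple: (pollinator taxon, crop, number of farmer groups citing the visit).
# The values below are illustrative placeholders, not the study data.
citations = [
    ("Apis dorsata", "brinjal", 9),
    ("Apis dorsata", "pumpkin", 7),
    ("Apis cerana", "brinjal", 6),
    ("Xylocopa spp.", "ridge gourd", 5),
    ("Amegilla spp.", "ladies finger", 3),
    ("Lasioglossum spp.", "bitter gourd", 1),
]

G = nx.Graph()
for pollinator, crop, n_groups in citations:
    G.add_node(pollinator, bipartite="pollinator")
    G.add_node(crop, bipartite="crop")
    # Edge weight = number of groups citing the interaction,
    # used as a crude proxy for confidence in that link.
    G.add_edge(pollinator, crop, weight=n_groups)

# Degree (number of linked crops) identifies the best-connected pollinators.
pollinators = [n for n, d in G.nodes(data=True) if d["bipartite"] == "pollinator"]
for p in sorted(pollinators, key=G.degree, reverse=True):
    crops = ", ".join(sorted(G.neighbors(p)))
    print(f"{p}: visits {G.degree(p)} crop(s) ({crops})")
```

A representation like this makes it straightforward to recompute connectance or taxon degree as further farmer groups are interviewed, while keeping the caveat noted above that absent links may simply reflect detection bias rather than a true lack of interaction.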
There is an important role for indigenous and local knowledge in a Multiple Evidence Base to make decisions about the use of biodiversity and its management. This is important both to ensure that the knowledge base is complete (comprising both scientific and local knowledge) and to facilitate participation in the decision making process. We present a novel method to gather evidence in which we used a peer-to-peer validation process among farmers that we suggest is analogous to scientific peer review. We used a case-study approach to trial the process focussing on pollinator decline in India. Pollinator decline is a critical challenge for which there is a growing evidence base, however, this is not the case world–wide. In the state of Orissa, India, there are no validated scientific studies that record historical pollinator abundance, therefore local knowledge can contribute substantially and may indeed be the principle component of the available knowledge base. Our aim was to collate and validate local knowledge in preparation for integration with scientific knowledge from other regions, for the purpose of producing a Multiple Evidence Base to develop conservation strategies for pollinators. Farmers reported that vegetable crop yields were declining in many areas of Orissa and that the abundance of important insect crop pollinators has declined sharply across the study area in the last 10–25 years, particularly Apis cerana, Amegilla sp. and Xylocopa sp. Key pollinators for commonly grown crops were identified; both Apris cerana and Xylocopa sp. were ranked highly as pollinators by farmer participants. Crop yield declines were attributed to soil quality, water management, pests, climate change, overuse of chemical inputs and lack of agronomic expertise. Pollinator declines were attributed to the quantity and number of pesticides used. Farmers suggested that fewer pesticides, more natural habitat and the introduction of hives would support pollinator populations. This process of knowledge creation was supported by participants, which led to this paper being co-authored by both scientists and farmers.
491
Cell type specificity of tissue plasminogen activator in the mouse barrel cortex
We have shown that tPA expression in the somatosensory cortex is dependent on the sensory experience of the animal. Here, using double immunolabeling for tPA and markers of specific cell populations, we provide data showing cell type-specific tPA expression and that double-labeling patterns are not fixed. Adult male CD-1 mice were anesthetized using a 130/10 mg/kg ketamine–xylazine cocktail. After the animal reached a sufficient anesthetic state, it was head-fixed in a stereotaxic apparatus and a small craniotomy was made over the barrel cortex. Coordinates were obtained from a mouse brain atlas. A precision micro-manipulator was then used to insert a high-impedance tungsten microelectrode into the cortex such that neural activity from thalamic-recipient layer IV neurons was recorded to disc using a multi-channel data acquisition device at a rate of 30 kHz for 1.5 h. Multiple cortical penetrations were made at a depth of 450 μm in order to evaluate whisker-evoked responses in layer 4 of the barrel cortex. Animals were deeply sedated using an intraperitoneal injection of Euthasol until unresponsive to toe pinch. Transcardial perfusion was then conducted with 0.01 M phosphate-buffered saline followed by 4% paraformaldehyde in 0.01 M PB. Fixed brains were kept overnight in PFA at 4 °C, then sectioned using a vibratome at 60 μm at room temperature. Free-floating sections were rinsed 3× for 10 min each in 0.01 M PBS before and after each of the following steps. All double-immunofluorescence experiments were performed with sequential staining instead of mixing primary antibodies in a cocktail. For double-labeling with microglia, tissue was permeabilized and blocked using 0.5% Triton-X and 5% normal donkey serum (NDS) for 1 h at room temperature, followed by 1:1000 Iba1 primary antibody in 0.01 M PBS and 2% NDS for three nights at 4 °C. Slices were then incubated in a cocktail of 2% NDS and 1:200 Alexa 594 anti-goat secondary antibody for 2 h.
Following incubation in 0.5% Triton-X and blocking in 5% normal serum for 30 min, slices were incubated in 1:100 tPA primary antibody with 2% NDS for three nights. Slices were then incubated in a cocktail of Dylight 488 anti-rabbit and 2% NDS for 2 h at 37 °C. Lastly, the brain tissues were counterstained with Hoechst for 30 min, mounted onto gelatin-subbed slides and coverslipped using Vectashield. Negative controls were conducted by following the aforementioned procedures but leaving out either the primary antibodies or the secondary antibodies. No non-specific background labeling of cells was detected in the control tissue. For neuronal double-labeling experiments, somatostatin-positive interneurons were identified by intrinsic GFP from the GIN mouse line (45704Swn/J, purchased from Jackson Laboratory, Bar Harbour, Maine). For the double-immunohistochemical portion of the study, a separate group of animals was used to investigate the colocalization profile of tPA with parvalbumin and neurogranin immunopositive cells. The parvalbumin and neurogranin immunostaining was conducted as follows: tissues were blocked in 5% normal donkey serum with 0.3% Triton X in 0.01 M PBS, then incubated in anti-parvalbumin or anti-neurogranin for 24 h at 4 °C in 0.01 M PBS. After rinsing in 0.01 M PBS, the tissues were submerged in a secondary antibody for 2.5 h at room temperature. Afterwards, the tPA immunofluorescent staining protocol was followed as previously described, with the exception that the primary antibody and fluorophore were replaced with streptavidin conjugated to Alexa 647. Last, Hoechst staining was performed, and brain slices were extensively washed, air dried, dipped in distilled H2O, and cover-slipped as previously described. Imaging was conducted using a confocal microscope with the Alexa 594, FITC and DAPI filters and a 60× lens. Stacks with a 0.5 μm z-step were taken and the data were analyzed offline. The colocalization function was used to determine the degree of overlap between the different channels at individual pixels. Colocalization analysis yielded a Pearson's correlation for the pixel intensities between the two channels.
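For readers who want to reproduce the colocalization measure offline, the following Python sketch computes a Pearson correlation between the pixel intensities of two channels. The NumPy implementation, the optional ROI mask and the synthetic example arrays are illustrative assumptions, not the routine built into the confocal acquisition software referred to above.

```python
import numpy as np

def pearson_colocalization(channel_a, channel_b, mask=None):
    """Pearson correlation of pixel intensities between two equally sized image
    channels (e.g. tPA and a cell-type marker), optionally restricted to a
    boolean mask such as a cell-body region of interest."""
    a = np.asarray(channel_a, dtype=float)
    b = np.asarray(channel_b, dtype=float)
    if mask is not None:
        a, b = a[mask], b[mask]
    # Mean-centre without modifying the caller's arrays, then correlate.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Synthetic example: two partially overlapping "stains" on a 64 x 64 field.
rng = np.random.default_rng(0)
signal = rng.random((64, 64))
chan_tpa = signal + 0.3 * rng.random((64, 64))            # tPA-like channel
chan_marker = 0.7 * signal + 0.5 * rng.random((64, 64))   # marker-like channel
print(f"Pearson r = {pearson_colocalization(chan_tpa, chan_marker):.2f}")
```

In practice the mask argument can be used to restrict the correlation to a single z-plane or cell body, which is closer to how the per-cell double-labeling judgements described above would be quantified.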
We provide data in this article related to (C.C. Chen et al.,. Neurosci. Lett., 599 (2015) 152-157.) [1] where the expression of tissue plasminogen activator (tPA) is expressed by the whisker representation in the somatosensory cortex. Here, we provide immunocytochemistry data indicating that tPA is expressed by putative excitatory neurons as well as parvalbumin+ interneurons but not by somatostatin+ inhibitory interneurons. We also provide data showing that microglia do not normally express high levels of tPA, but upregulate their levels following cortical penetration with a recording electrode.
492
Reconstructing regional population fluctuations in the European Neolithic using radiocarbon dates: A new case-study using an improved method
Population size and density are key variables in human evolution.They represent important outcomes of evolutionary adaptation, and have strong feedback relationships with key processes such as: the transmission, selection and drift of both genetic and cultural information; infectious disease dynamics; land and resource use; niche construction; economic cycles and sustainability.To understand human evolution it is therefore necessary to estimate regional population fluctuations, and to identify their causes and consequences.Major advances are now being made in this field due to the growing availability of modern and ancient genetic data and associated modelling approaches.However, estimates of population size from these data generally lack adequate chronological and/or spatial resolution, or the data are too few in number, to draw meaningful inferences about their relationship with these key processes.Directly dated archaeological site information does not suffer from these problems but, with some recent exceptions using cemetery age distributions, Zimmermann et al. using site spatial distributions, and Hinz et al. using summed radiocarbon probabilities), archaeologists, in Europe at least, have been strikingly reluctant to make demographic inferences from such data, and are generally keener to emphasise the pitfalls than the possibilities."When Rick proposed using summed date distributions as data for the purpose of reconstructing spatial-temporal variation in coastal-highland settlement practices during the Peruvian preceramic period, an important new weapon was added to the archaeologist's armoury.In his inferential chain, Rick laid out three main assumptions that underpin this approach; firstly that more dateable objects will be deposited during periods when the population was larger, secondly that more deposits will lead to more objects preserved in the archaeological record, and thirdly that more preserved objects will lead to more dateable material eventually recovered by archaeologists.Joining these together gives us the assumption of a monotonic relationship between the population size and the amount of radiocarbon dates recovered."Therefore a suitable radiocarbon database can be used to construct a time-series by summing each date's probability distribution, and the fluctuations in this time-series can then be used as a proxy for changing population size.Of course the extent to which these assumptions are satisfied can be difficult to determine.The law of large numbers predicts that larger sample sizes should more fairly represent the archaeological record, but this may already be a taphonomically biased representation of the original deposits.Some control can be achieved by using radiocarbon dates from a confined spatial region, small enough for taphonomic losses to be considered spatially homogenous.However, this necessarily reduces sample sizes, and so a balance must be found.Even in this simple case, where the analysis deals only with a local pattern, we can expect constant homogenous taphonomic losses to manifest as a gradual loss over time in the archaeological record, and therefore a long-term exponential increase in the summed distribution."Whilst the utility of this approach is reflected in its increasing application, the biases and assumptions noted in Rick's chain of inference have also been subject to increasing critical scrutiny.Three major issues that persist are; the impact of sample size, fluctuations in the radiocarbon calibration curve – which have the effect of 
concentrating dates in some time periods and spreading them out across others – and the effect of differential taphonomic and archaeological recovery processes on what is available for dating.In our previous study we showed that many of the problems and biases raised by the standard approach of summing radiocarbon dates can be resolved.Despite this, criticisms persist; most recently, for example, Contreras and Meadows again raise these concerns.The authors simulate a radiocarbon dataset by sampling from a prior ‘assumed true’ population curve, and then comment on the dissimilarity between the sampled summed probability distribution and the ‘true’ population curve from which it was sampled.In principle this is a sensible approach, which should be expected to demonstrate good congruence as the sample size increases; however the authors argue the contrary, that there is poor congruence, and conclude the method is unreliable.There is a simple explanation for this.Because of the interference effect of wiggles in the calibration curve, spurious fluctuations exist on a scale below c.200 years, rendering this method quite useless for any time-series shorter than a few thousand years.This is simply a matter of analysing at the appropriate scale – the effect of these wiggles is invisible and irrelevant at the scale of tens of thousands of years.As with our previous study, we apply this method to dates spanning several thousand years, before trimming the summed distribution down to a 4000 year period of interest, to avoid edge effects.Furthermore we plot a 200 year rolling mean, to discourage the reader from over-interpreting smaller scale features.In contrast, Contreras and Meadows invoke a straw man by simulating dates over the inappropriately short time ranges of 700 years and 800 years, so that the shape of their distribution is dominated by these spurious short-term wiggles.They obfuscate matters further by plotting the simulated distribution over a wider 1200 year range, so as to include yet more spurious edge effects outside the range covered by the sampled data.Shennan et al. also showed that a more comprehensive Monte-Carlo simulation-based method, which generates simulated date distributions under a fitted null model, can be used to test features in the observed dataset for statistically significant patterns.The results of this Monte-Carlo Summed Probability Distribution (MCSPD) method can be supplemented by comparing the radiocarbon population proxy with other proxies, based on independent evidence and different assumptions.Thus, Woodbridge et al. compared this population proxy for Britain with independent evidence for forest clearance, based on pollen analysis, which serves as an indicator of human environmental impact and hence population size, and found a strong correlation: peaks in the summed date distribution correspond to more open environments and troughs to more extensive forest cover.Other studies of the European Neolithic have produced the same result.Shennan et al.
addressed the question of whether the arrival of farming in the different regions of Europe was associated with a significant departure from a fitted null-model of long-term exponential growth that characterises both global population history and the increased survival of the archaeological record towards the present.Results for the majority of the European regions showed significant departures from this null model, and indicate that boom and bust fluctuations followed the arrival of farming.The occurrence of population booms – periods of rapid population growth – associated with the local arrival of farming was unsurprising, on the basis of both theory and inferences of increased growth rates derived from cemetery age-at-death distributions.However, the consistent evidence for population ‘busts’ contradicts standard views about the long-term impact of agriculture on population levels.Furthermore, cross-correlation of the population fluctuations with climate data did not support the hypothesis that the fluctuations were climate-driven.This paper pursues a similar agenda by examining dates from another twelve European regions to see if they continue to support the boom-bust pattern, but does so by means of an improvement to the existing MCSPD-method that was presented in Shennan et al.We provide a detailed description of the improved method, and demonstrate its power using one of the twelve regions as a test set, by progressively sampling smaller and smaller training datasets and comparing the results.Finally, we examine the population reconstructions for the twelve regions and discuss their implications.As with our previous study, radiocarbon dates for each study area were selected from the EUROEVOL project database.Once again we used a fully inclusive approach on the basis that inaccurate dates would obscure any genuine underlying patterns, thus having a conservative effect, and that the larger the sample, the closer it will approximate the true distribution.By definition, approximately 5% of any SPD will be falsely considered unusually high/low density by the existing method, and reported as locally significant.This is because 5% of any random data falls outside its 95% confidence interval, and can be loosely considered as ‘false positive’ points on the SPD plot, in so far as all the red/blue regions can be considered ‘positive’ points.A ‘global’ p-value informs us if overall there is a significant departure from the null for the entire time series, since this p-value is estimated by comparing a single global summary statistic from the observed SPD with a distribution of the same statistic from all the simulated SPDs.However, we are left with difficulties in interpreting precisely which time intervals truly depart from expectation under the null-model, and which are ‘false’.Here we introduce an additional function that seeks to filter out the points that are most likely to be ‘false positives’ from both the observed SPD and each simulated SPD.The new ‘false positive remover’ function uses the principle that the ‘false positive’ points were not caused by an interesting underlying signal in the data, and instead are randomly distributed through a SPD.Therefore the ‘false positives’ are more likely to occur alone than in pairs, more likely in pairs than triplets etc.In contrast, ‘true positives’ in an observed SPD are more likely to have low entropy, i.e. 
exhibit order, and therefore occur in blocks, since they are caused by some underlying population process.The new function therefore filters out single positive points on an SPD, followed by points with only one neighbouring point, until a maximum of 5% of the SPD has been removed from the category of positive.The immediate effect of this is to selectively reduce the amount of red/blue in a manner that tends to preferentially reduce the false positive points in the observed SPD, thus improving the specificity of the plots.However, the new function is also applied equally to the simulated SPDs, which has a very different effect, since the simulated SPDs are used to calculate the global p-value.The simulated SPDs are assigned fewer positive points, which lowers the summary statistic threshold used in calculating the global p-value.This results in the test having increased sensitivity, since it is now better at successfully rejecting an incorrect null.Overall these two effects combine to substantially increase the power and usefulness of the method.We use one of the twelve regional datasets – Eastern Middle Sweden (EMS) – to test the efficacy and statistical power of the improved MCSPD-method, which includes the addition of the ‘false positive remover’ function.The EMS dataset was selected as a suitable candidate for this demonstration since it exhibits a simple and coherent pattern, which despite the small sample size corresponds well to that seen in a much larger sample of dates from the same region.Initially the full EMS dataset was analysed using all 93 samples.This was repeated for 7 further tests, with each subsequent subset being one third smaller than the previous subset, but always randomly sampled from the original dataset.The rationale behind this approach is that by assessing the similarity of the SPDs between subsets, we can estimate the minimum sample size required to recover a shape that is fairly representative of the 100% SPD.This also provides a way of critically assessing the p-values generated by the method, since both the p-value for each subset, and the similarity of shape between subsets, are indicators that the shape is not a random artefact of a small sample.This demonstration utilises the consequences of the law of large numbers, which predicts that as sample sizes increase, the sample distribution approaches the true distribution; therefore as sample sizes decrease, the shapes of the sample distributions will become increasingly different.The results show remarkable similarity in the broad scale shape of all SPDs, even at a sample size of 6% of the full dataset, comprising just six 14C dates across 4000 years.Local regions of high density also exhibit good synchronicity across all random subsets.When a strong pattern of clustering is present in the data the global p-values suggest that the improved method has the statistical power to detect significance with sample sizes as small as 12 dates across 4000 years.At this level the similarity with the full dataset is remarkable, both in terms of the shape of the SPD and the local regions of significance.Regions of low density are sparse in the full dataset and disappear entirely on subsequent smaller subsets.This phenomenon is driven by the small sample size and is a consequence of the fact that absence of evidence is not evidence of absence in a small dataset, whilst as sample sizes increase the contrary becomes true — the absence of dates increasingly becomes evidence of absence.In contrast to the results from Fig. 2, Fig.
3 shows that the same method applied to a similar sample size for Swedish Baltic Islands revealed no significance.This might seem counterintuitive since there are clear fluctuations in the SPD, and seems a good candidate for subjective disagreement about the extent to which those fluctuations are significant, or merely the random effects of sampling.Clearly the method has performed conservatively in supporting the latter interpretation.The population reconstructions and associated global significance values for the 12 regions in this study, using the improved method, are shown in Fig. 3.We used the same null model as in the previous study – an exponential fitted to the SPD from all 13,658 dates in the study area across the wider range of 10,000–4000 BP.As in our previous study the great majority of the regions show evidence of departures from the overall European exponential trend, with indications of population booms and busts, and eight of the 12 regions show strong indications of a statistically significant population increase immediately following the arrival of farming, with a ninth region also showing some indication of this increase.However, it is important to note that whilst many red regions appear to begin and end with periods of rapid growth and decline, the declines in most cases are not highlighted in blue to show significantly low density.This is unlike the results in our previous study where periods of significantly high density were in many cases soon followed by periods of significantly low density.This difference is almost certainly driven by the smaller sample sizes, and further explained in the demonstration section, in terms of evidence of absence.In any case strictly speaking, the p-value globally tests for a statistically significant departure from the null exponential model, whereas the interpretation of booms and busts are reasonable but subjective inferences drawn from the shape of the SPD and its highlighted time periods of ‘positive’ points.Therefore given the much smaller sample sizes in this study it seems more appropriate to interpret ‘bust’ as a marked fall in density, rather than a significantly low density in itself.As noted above, in all regions where the comparisons have been made, the population reconstructions have been supported by the lower resolution pollen evidence of changing anthropogenic impacts.We can now turn to the individual regions.England and Wales without Wessex shows the same pattern as the other regions of the British Isles with a boom following the arrival of farming at c.6000 BP followed by a crash down to a level little more than half the preceding peak.The population remains at this relatively low level for nearly 800 years, starting to climb again at c.4500 BP, in a pattern closer to that of Ireland than Scotland or Wessex.The pattern for western France again indicates a population boom with the appearance of farming in the early 7th millennium BP.Here the peak is at around 6000 BP and it is apparent that the subsequent decline corresponds in time to the population expansion in England and Wales, possibly suggesting a link between the two regions.Over the course of the next 800 years, apart from a brief uptick just after 5500 BP, the population proxy gradually drops to around two-thirds of its earlier peak at 5200 BP, a low point also observed in several other regions, though in France the rate of decline is less marked.The final drop at the end of the sequence may perhaps be related to a tendency to rely more on typology than 
radiocarbon dates among scholars working on the Bronze Age.The Lowlands, Netherlands and Belgium, excluding the coastal regions subject to Holocene inundation, show a complex pattern of booms and busts, starting in the Mesolithic, with a peak at the start of the sequence at 8000 BP followed by a drop to half the peak level.A boom with the arrival of LBK farming groups in parts of the region at c.7300 BP is followed by another sharp drop in the early 7th millennium BP to less than half the LBK peak level; this corresponds to other evidence for the abandonment of the LBK areas of the Netherlands at this time.Population then increases again as agriculture gradually infiltrates this region before another drop in the second half of the 6th millennium BP, roughly synchronous with a drop in a number of other regions, as noted above.This in turn is followed by a rapid rise to peak at c.4700 BP, a level maintained for c.300 years, before a further sharp drop, though it cannot be excluded that different dating practices are relevant here.Of the four Swedish regions, three indicate a population boom following the appearance of agriculture at around 6000 BP, followed by a rapid decline.However, while Western and Eastern Middle Sweden rise rapidly to a peak in the middle of the 6th millennium BP, in Central Southern Sweden the rate of growth is much slower, reaching a peak just after 5000 BP before declining to half the peak level.Western Sweden shows a second peak at the same date, before declining very rapidly to less than one-third of the maximum.Eastern Middle Sweden declines steadily after the mid-6th millennium BP high point, to roughly half this level by 5000 BP.There is no evidence of a significant departure from expectation under the null model in the Swedish Baltic Islands.Three of the four Swedish regional patterns are very similar to those shown in Hinz et al.; the exception is the Swedish Baltic Islands, for which the similarity is less clear, and which in any case is lacking in significance.The two Polish regions, Kujavia and Little Poland, again show significant departures from the null.In Little Poland there is a rapid rise in population, reaching a peak just after 5000 BP, which is long after the initial appearance of LBK farmers in the region around 7500 BP.There is a further peak in the middle of the 5th millennium BP after a slight dip, and then a major fall-off to less than half the maximum, though it is possible that Bronze Age scholars take less radiocarbon samples.Kujavia has a more complex pattern, with a significant rise from below trend associated with the first farming, but the highest values occur in the mid 6th millennium BP, and are double the LBK levels.There is then a drop in the late 6th millennium BP, as in many other regions, before a short-lived peak at c.4800 BP.In eastern Switzerland a major population boom is indicated with the arrival of farming in the late 7th millennium BP and a rapid decline in the mid 6th millennium BP, corresponding to that seen in the other three regions considered above.The subsequent rise to an equally high but more short-lived peak at c.4800 BP and the following major fall are corroborated by corresponding patterns in anthropogenic impacts inferred from pollen analysis for part of the same region.Of the two remaining regions, Bohemia shows no evidence of departure from the exponential trend.On the other hand, Moravia, which includes the adjacent area of Lower Austria, shows a dramatic population increase associated with the arrival of 
LBK farming c.7400 BP, followed by a rapid fall to little more than half the peak value just after 7000 BP, and then another immediate expansion to a peak at c.6600 BP before a collapse to a small fraction of the peak in the last centuries of the 7th millennium BP.We do not see a return to these previous high values, and there is a significant drop below trend in the early 5th millennium BP.The density of radiocarbon dates in a dataset, and how this varies through time, provides a useful proxy for fluctuations in human population levels.This approach made a major step forward with the use of the new computational method presented in Shennan et al.For the first time statistical rigour was introduced using the MCSPD-method, providing confidence in whether the observed fluctuations could be considered significant, or were merely the consequence of sampling error and features in the calibration curve.This new tool allowed us to interrogate the EUROEVOL database containing nearly 14,000 dates in unprecedented spatial detail, revealing boom and bust patterns that followed the first appearance of farming in many different regions of Europe.This paper builds on that earlier groundwork, and introduces an improvement to the computational method, providing greater confidence that the patterns detected represent a genuine signal of changing population levels.The efficacy of this improved method is demonstrated by its ability to detect a statistically significant signal in remarkably small datasets-only 12 samples across 4000 years in the particular dataset tested.Equipped with this improved tool, we have therefore been able to assess 12 new and independent regions of the European Neolithic.Despite these new regions containing substantially fewer samples, we have again found evidence of statistically significant fluctuations in regional population levels, occurring at different times in different regions.Our results provide compelling support for the argument proposed in the previous study, that boom-bust fluctuations rapidly followed the appearance of farming.The prevalence of this phenomenon in so many regions across Europe, combined with the dissimilarity and lack of synchronicity in the general shapes of the SPDs, supports the hypothesis of an endogenous, not climatic cause.
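To make Rick's summing logic and the 200-year rolling mean described above concrete, the following sketch shows one way the pieces fit together. It is not the code used in this study (which was run against the EUROEVOL database with an established calibration curve such as IntCal): the function names, the synthetic calibration curve and the fake list of radiocarbon determinations are illustrative assumptions only.

```python
# Minimal sketch of an SPD: calibrate each 14C determination against a
# calibration curve, normalise its probability mass to 1, sum the per-date
# densities, then smooth with a 200-year rolling mean and trim the edges.

import numpy as np

def calibrate(c14_age, c14_err, cal_bp, curve_c14, curve_err):
    """Normalised probability density over the cal_bp grid for one
    radiocarbon determination, using a normal error model."""
    total_err = np.sqrt(c14_err ** 2 + curve_err ** 2)
    density = np.exp(-0.5 * ((c14_age - curve_c14) / total_err) ** 2) / total_err
    return density / density.sum()

def spd(dates, cal_bp, curve_c14, curve_err):
    """Sum the calibrated densities of many (14C age, error) determinations."""
    out = np.zeros_like(cal_bp, dtype=float)
    for age, err in dates:
        out += calibrate(age, err, cal_bp, curve_c14, curve_err)
    return out

def rolling_mean(series, window=200):
    """Centred rolling mean (window in calendar years on a 1-year grid)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

if __name__ == "__main__":
    # Illustrative (synthetic) calibration curve on a 1-year grid, 10,000-4000 cal BP;
    # a real analysis would substitute IntCal values here.
    cal_bp = np.arange(10000, 3999, -1)
    rng = np.random.default_rng(0)
    curve_c14 = cal_bp * 0.87 + 400 + np.cumsum(rng.normal(0, 2, cal_bp.size))  # wiggles
    curve_err = np.full(cal_bp.size, 20.0)

    # Fake (14C age, error) determinations standing in for a regional dataset.
    dates = [(int(a), 40) for a in rng.uniform(4500, 8000, 93)]

    raw = spd(dates, cal_bp, curve_c14, curve_err)
    smooth = rolling_mean(raw, 200)

    # Trim the ends of the summed distribution to avoid edge effects,
    # keeping a 4000-year window of interest (here 9000-5000 cal BP).
    keep = (cal_bp <= 9000) & (cal_bp >= 5000)
    print(cal_bp[keep][:5], smooth[keep][:5])
```

In practice the per-date densities would come from a proper calibration routine and the SPD would be compared against simulations under the fitted null model; the point of the sketch is only the summation, smoothing and edge-trimming steps that the text relies on.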
In a previous study we presented a new method that used summed probability distributions (SPD) of radiocarbon dates as a proxy for population levels, and Monte-Carlo simulation to test the significance of the observed fluctuations in the context of uncertainty in the calibration curve and archaeological sampling. The method allowed us to identify periods of significant short-term population change, caveated with the fact that around 5% of these periods were false positives. In this study we present an improvement to the method by applying a criterion to remove these false positives from both the simulated and observed distributions, resulting in a substantial improvement to both its sensitivity and specificity. We also demonstrate that the method is extremely robust in the face of small sample sizes. Finally we apply this improved method to radiocarbon datasets from 12 European regions, covering the period 8000-4000BP. As in our previous study, the results reveal a boom-bust pattern for most regions, with population levels rising rapidly after the local arrival of farming, followed by a crash to levels much lower than the peak. The prevalence of this phenomenon, combined with the dissimilarity and lack of synchronicity in the general shapes of the regional SPDs, supports the hypothesis of endogenous causes.
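The 'false positive remover' criterion and the global p-value calculation summarised above can likewise be sketched. This is a reading of the published description rather than the authors' released implementation: the removal order (singletons, then run edges), the handling of the 5% budget and the choice of 'area outside the 95% envelope' as the global summary statistic are assumptions made for illustration.

```python
import numpy as np

def remove_false_positives(positive, budget_frac=0.05):
    """Drop the most isolated 'positive' points of an SPD (points outside the
    95% simulation envelope): singletons first, then points with only one
    positive neighbour, until at most budget_frac of the SPD has been removed
    from the positive category."""
    positive = positive.copy()
    budget = int(budget_frac * positive.size)
    removed = 0
    for max_neighbours in (0, 1):            # 0 = isolated points, 1 = run edges
        changed = True
        while changed and removed < budget:
            changed = False
            padded = np.r_[False, positive, False]
            neighbours = padded[:-2].astype(int) + padded[2:].astype(int)
            for idx in np.flatnonzero(positive & (neighbours <= max_neighbours)):
                if removed >= budget:
                    break
                positive[idx] = False
                removed += 1
                changed = True
    return positive

def global_p_value(obs_spd, sim_spds, budget_frac=0.05):
    """Compare one summary statistic of the observed SPD (area outside the 95%
    envelope, after filtering) with the distribution of the same statistic
    across SPDs simulated under the fitted null model."""
    lo, hi = np.percentile(sim_spds, [2.5, 97.5], axis=0)

    def statistic(spd):
        outside = remove_false_positives((spd > hi) | (spd < lo), budget_frac)
        excess = np.where(spd > hi, spd - hi, lo - spd)   # distance outside the envelope
        return excess[outside].sum()

    sim_stats = np.array([statistic(s) for s in sim_spds])
    return (1 + np.sum(sim_stats >= statistic(obs_spd))) / (1 + len(sim_stats))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sims = rng.normal(1.0, 0.1, size=(1000, 4000))   # stand-in simulated SPDs
    obs = rng.normal(1.0, 0.1, size=4000)
    obs[1500:1800] += 0.5                            # an artificial 300-year 'boom'
    print(global_p_value(obs, sims))                 # a small p-value is expected
```

Because the same filter is applied to the observed SPD and to every simulated SPD, removing positives also lowers the simulated summary statistics, which is how such a criterion can raise the sensitivity of the global test while improving the specificity of the plotted red/blue regions.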
Increasing engagement with an occupational digital stress management program through the use of an online facilitated discussion group: Results of a pilot randomised controlled trial
In the UK prevalence rates for work-related stress, depression and anxiety are high, accounting for 11.7 million lost working days and resulting at both a clinical and a sub clinical level in reduced work performance and absenteeism.There is evidence that these conditions are both preventable and treatable in the workplace.A recent meta-analysis has shown that digital mental health interventions delivered in the workplace can be effective at reducing psychological distress and increasing workplace effectiveness; however, despite examples of occupational digital mental health interventions that have achieved good adherence one of the challenges of digital mental health still remains increasing adherence and engagement.While digital interventions are typically designed for widespread accessibility, uptake can be low and the discontinuation curve steep.A randomised controlled trial of a digital mental health intervention delivered in the workplace reported that only 5% of participants started one or more of the modules, and a trial of digital mindfulness delivered in a workplace reported that between 42% and 52% of all participants in the active conditions never logged on to the program.Carolan et al. found that the mean highest reported completion across 19 studies in their meta-analysis was 45% with a range of 3% to 95%.Research has consistently shown that providing guidance can lead to greater adherence to web-based interventions.An online facilitated discussion group may be one way of providing that guidance in a time efficient way.Previous studies have incorporated discussion groups into their interventions but have failed to identify the impact of the group on the effectiveness of the intervention.In this study we therefore compare engagement with a minimally supported CBT based digital mental health program delivered in the workplace with and without access to a facilitated discussion group, and to a wait list control, and explore whether increased engagement suggests increased effectiveness.The trial was conducted as a pilot trial to gain greater confidence in predicting effect size, refining optimum engagement of the intervention, understanding accuracy of engagement measures, and understanding the challenges of conducting the trial in the workplace.A three-arm randomised controlled trial was conducted comparing a minimally supported web-based CBT based stress management intervention delivered with and without an online facilitated bulletin board, with a wait list control.Randomisation was conducted on a ratio of 1:1:1.All participants had unrestricted access to care as usual.The trial was conducted to examine the effect of an online facilitated discussion group on engagement with a minimally supported digital stress management intervention delivered to employees, and to look at the estimated potential effectiveness of the program.Assessment took place at baseline, at post treatments and at follow-up.Participants in the active conditions completed a credibility and expectancy questionnaire at two weeks following randomisation.All assessments were completed online.This trial was conducted and reported in line with the CONSORT eHealth checklist.Further information about this trial is available from the trial protocol.The study was approved by the University of Sussex Science and Technology Cross-School Research Ethics Committee, and registered with ClinicalTrials.gov NCT02729987.UK based organisations that had subscribed to the WorkGuru mailing list were invited to participate in this 
study.Participating organisations circulated a statement to staff inviting them to follow a link or contact the first named author for more information.Participating organisations were encouraged to offer employees a minimum of 1 h a week over the eight-week period to complete the program.Participants who were: i) aged 18 or over, ii) employed by a participating organisation, iii) willing to engage with a web-based CBT based stress management intervention, iv) had access to the Internet, v) had access to a tablet or computer, vi) had an elevated level of stress, as demonstrated by a score of ≥ 20 on the PSS-10, were recruited to the study between March and June 2016.No exclusion criteria were set.The cut off of 20 on the PSS-10 represents one standard deviation above the mean in a large US general population sample.Participants who met the inclusion criteria were invited to complete a baseline questionnaire that was completed online.A consent statement was included on the front page of the questionnaire; participants gave consent to take part in the study by completing the questionnaire.Participants were informed that their participation was confidential and their organisation would not be informed of which employees were participating in the study.On completion of the baseline questionnaire, participants were randomised to one of the three study arms.An allocation schedule was created using a computer generated randomisation sequence.An independent researcher allocated each group as an active condition or the WLC.The study researchers were blind to the group allocation.Participants allocated to the Minimal Support Group were able to access the intervention immediately.Participants allocated to the discussion group were also able to access the intervention immediately, but were asked to wait for up to three weeks for the start of the group.The delay in starting the facilitated group was to enable an optimum number of participants to begin the group together; participants were encouraged to access the bulletin board and take part in an introductory exercise while they were waiting for the group to start.Participants allocated to the WLC were able to access the intervention after 16 weeks.A more detailed description of the web-based CBT based stress management program WorkGuru is available from Carolan et al.The program was presented on a secure platform that participants logged-on to using an email address and a self-generated password.The eight-week program was based on the psychological principles of CBT, positive psychology, mindfulness and problem solving.It consisted of seven core modules that all participants were encouraged to complete and three additional modules.The core modules included information and exercises on stress, resilience, values, cognitive restructuring, automatic thoughts, unhelpful thinking styles and time management.The additional modules contained information on mindfulness, problem solving and imagining the future self.Participants completed the modules at their own pace.They could either complete a questionnaire and receive suggestions of which modules that they might find useful, or choose the modules that they wished to complete themselves.The modules consisted of a combination of educational reading, audio, short animations and interactive exercises.Participants could also complete eight self-monitoring standardised questionnaires, including the Perceived Stress Scale, the Subjective Happiness Scale, and the Brief Resilience Scale.They were also able to 
opt-in to a weekly motivational email that contained a motivational quotation and advice on staying well in the workplace, and could set themselves email reminders to visit the site.To encourage engagement, an e-coach contacted the participants through the site when they first logged-on, at two weeks, and at six weeks.Messages from the coach were all personalised.Participants could choose to share work with the coach and could contact the coach for information or advice.The coach responded within 24 h.While using the WorkGuru site, users were prompted to contact their GP, NHS 111 or the Samaritans if they were concerned about their mental health.Contact details for NHS 111 and the Samaritans were given.Participants allocated to the MSG had access to the intervention as described above.Participants allocated to the discussion group had access to the intervention as described above; they also had access to an eight-week online guided discussion group that was delivered via a bulletin board.Each week the coach introduced one or more of the modules and encouraged discussion about the topic.Participants chose a user name, and were able to be anonymous in the group.The primary outcome measure was engagement, which was measured using the number of logins to the site.The number of logins was chosen as the primary outcome measure because it is the most commonly reported objective exposure measure used in studies of digital health.Secondary measures included further measures of engagement, and of psychological outcomes: a measure of depression, anxiety and stress and a measure of wellbeing at work.DASS-21 is a 21-item scale that was designed to measure the negative emotional states of depression, anxiety and stress.Items are answered on a 4-point Likert scale."Cronbach's α for the subscales at baseline were: depression α = 0.88; anxiety α = 0.90; stress α = 0.84 in this study.The IWP Multi-Affect Indicator is a measure of wellbeing at work.It is a 16-item scale that is scored on a 7-point scale.Participants are asked the approximate amount of time they have felt different emotions during the week.The subscales for depression and anxiety are reverse scored, resulting in higher scores representing higher wellbeing."Cronbach's α for the subscales at baseline were: enthusiasm α = 0.87; anxiety α = 0.90; comfort α = 0.74; depression α = 0.84 in this study.Other measures taken were: client satisfaction, which is an eight-item questionnaire that is rated on a 4-point scale with reverse scoring on four items.The questionnaire was developed to assess general satisfaction with services, α = 0.95 in this study; acceptability which is a six-item questionnaire that is rated on a five-point scale, α = 0.62 in this study; treatment credibility and patient expectancy, which is a six-item questionnaire that utilises two rating scales, one from 1 to 9 and the other from 0 to 100%.Participants are asked what they thought or felt about the treatment.The measure achieved α = 0.92 in this study; system usability, which is a ten-item questionnaire, rated on a five-point scale.Five of the items are reverse scored, and the sums of the scores are multiplied by 2.5 to obtain an overall value.A score of < 50 would be regarded as a cause for significant concern; scores above 70 are seen as acceptable, with scores in-between suggesting the need for continued improvement.In this study α = 0.92; negative effects of treatment, using one-item developed for this study, which asks the question: “What, if any, positive or negative 
effects caused by the program/being in the control group did you experience?"Possible moderators explored were: goal conflicts, using the goal conflict index developed for this study.This is a three-item questionnaire that is rated on a five-point scale, α = 0.59; job autonomy, using the nine-item autonomy subscale from the Work Design Questionnaire, which is rated on a five-point scale, Cronbach's alpha for the subscales at baseline were all α ≥ 0.83 in this study; time perception, a 5-item questionnaire, which is rated on a five-point scale, α = 0.74 in this study; levels of psychological distress at baseline as measured on DASS.Engagement measures specific to the discussion group were taken as well as the Online Support Group Questionnaire, which is a nine-item questionnaire that is rated on a ten-point scale.Cronbach's alphas for the subscales were α > 0.77 in this study.Existing psychological illness, CAU, sickness absence for stress related complaints, and contamination between the groups were monitored.Demographic measures included age, gender, fluency of written and spoken English, country of birth, relationship status, work role, number of working hours, organisation, education level, income bracket and familiarity with the online environment.All analyses were performed using SPSS version 22.Due to the pilot nature of this study, descriptive information was presented; exploratory inferential analyses were conducted using ANCOVA and t-tests as appropriate.Analyses of the primary and secondary outcome measures were conducted on an intention-to-treat basis; sensitivity analysis included a per-protocol analysis.Per-protocol was defined as three or more logins to the WorkGuru site.A significance level of 0.05 was used for all analyses.Cohen's d (using pooled standard deviations) and 95% CIs were calculated.Effect sizes were interpreted using the classification given by Cohen.Outliers > 3.29 standard deviations away from the mean were identified.Missing data was imputed using the Last Observation Carried Forward method.Baseline differences between groups were explored using chi-square and ANOVA.Individuals who had subscribed to a WorkGuru marketing mailing list while attending conferences were invited to nominate their organisation to take part in the research.Nineteen organisations expressed an initial interest, none of which had previous experience of WorkGuru.Six of the organisations were recruited into the study.All six organisations were UK based: two were local authorities, two were universities, one was a third sector organisation, and one was a telecommunication organisation.Participating organisations directed staff to information and promoted the study through emails, intranet, in-house magazines and newsletters.The marketing statement used by the organisations gave a brief description of the intervention and emphasised that participation would be entirely confidential.Fig.
1 summarises the recruitment and flow of participants through the study.Of the 135 individuals who were assessed for eligibility, 23 were excluded because they scored ≤ 19 on PSS-10, and 28 were excluded because they did not complete the baseline measure.A total of 84 individuals were randomised.Two individuals withdrew from the study after randomisation: one reported changing jobs and the other reported an increase in workload, which meant he/she would not have time to participate in the study.For all the engagement measures, the data was gathered through the web-based program.Two participants did not create an account for themselves, resulting in data being available for 80 of the 82 participants.Of the 82 participants, 62 completed questionnaires at 8 weeks after randomisation, and 70 at 16 weeks after randomisation.Of the 54 participants in active conditions, 36 completed the credibility and expectancy questionnaire 2 weeks after randomisation.Chi-square tests found the groups did not differ in regard to missing data.Participants who provided data at T2 and T3 did not differ from those who did not on baseline scores of depression, anxiety or stress, or on gender or allocated group.Demographic data for all study participants are displayed in Table 1.A significant difference was found between the randomised groups on both the occupation and the highest qualification variables.Sensitivity analysis was run with highest qualification as a covariate; no effect was found.No other differences were found between the groups on demographic information or levels of depression, stress or anxiety at baseline.Mean levels of depression, anxiety and stress for participants at baseline, as measured on the DASS, were moderate to severe for depression and moderate for both anxiety and stress.The average age of participants was 41.0 years.The majority were female, were born in the UK, were married or living with a partner, were in senior manager or administrator roles, and had at least a first degree.Participants had been in paid employment for a mean of 19.7 years.All were fluent in both written and spoken English.Most were fairly or very familiar with the online environment.Just under half of participants had a recent diagnosis of mental illness, with 33% currently taking medication for anxiety or depression.Previous experience of stress management training was reported by 48% of participants.Participants were asked on a scale of 1 to 10 how important it was to them to reduce their level of workplace stress.Over 87% of participants indicated 8 or above, with 51% indicating the highest score.Two of the six organisations that participated in this study provided demographic information.Comparing gender information, the proportion of females among study participants was larger than in the organisations' workforces.One univariate outlier was found on each of the login and the page view variables; these were replaced with the group mean in each case.Sensitivity analysis indicates that if the outliers were not removed then the effect sizes remain in the same order of magnitude as reported below, but the CIs for both the mean number of logins and the mean number of pages viewed no longer cross zero.Data for the primary and secondary engagement measures are shown in Table 2.The means for the three engagement outcomes show a greater number of logins, modules completed and page views for the DG compared to the MSG.A medium between group effect size was observed for the primary outcome of logins and for the secondary outcome of page views, and a
small effect size was observed for modules completed.Confidence intervals for all outcome effect sizes crossed zero.No difference was found in the self-report engagement between the two groups.Descriptive data for both psychological outcomes at all three assessment points is shown in Table 3.Table 4 shows the between group effect sizes.At T2 a small between group effect size difference was found between both active conditions compared with the WLC on all three sub-scales of the DASS.No difference was found between the two active conditions.At T3 a small effect size difference was maintained between DG and the WLC on both the anxiety and stress subscales, and a small or medium between group effect size difference was maintained between MSG and WLC on all three subscales.Confidence intervals for all outcome effect sizes on the DASS with the exception of the T3 between group effect size between the MSG and WLC on the stress subscale, cross zero.At T3, small between group effect size differences were found between the two active conditions on both the depression and the stress subscales.Examination of the means suggests that the means for both depression and stress are smaller in the MSG.Findings from the IWP data suggest that there was a small effect size difference between both active conditions and WLC on the enthusiasm and comfort subscales at T2, which is maintained in the MSG group at T3, suggesting that there is an increase in enthusiasm and comfort in the active conditions and that this is maintained at T3 in the MSG group.Contrary to the DASS data, an effect size of zero or only a very small effect size was found on the depression and the anxiety subscales at T2.At T3 a small effect size difference is found on the anxiety subscale between both active conditions and the WLC.Small group effect sizes are also found at T3 between the two active conditions on both the anxiety and the comfort subscales.Examination of the means suggests that the improvements to both anxiety and comfort are in favour of the MSG group.Confidence intervals for all outcome effects sizes on the IWP measure crossed zero.Per-protocol analysis was conducted using data from participants who had logged into the program ≥ 3 times, and who had completed questionnaires.Protocol adherence was achieved by 70% of participants.Per-protocol analysis mirrored the effect size for the primary outcome number of logins.Results for the DASS showed larger effect sizes: at T2 a medium to large between group effect size was found between both active conditions and the WLC on all subscales of DASS, small to medium effect sizes were maintained at T3.The between group effect sizes for MSG and WLC at both T2 and T3 for the subscale stress were both significant effect sizes.The confidence intervals for all the other effect sizes crossed zero.At T3 a small to medium between group effect size was found between both the active conditions with the mean scores showing a lower level of depression, anxiety and stress for the MSG, confirming the findings in the ITT analysis that while participants in both active conditions have reduced levels of stress, depression and anxiety, participants in the MSG seem to benefit most from the intervention.Per-protocol analysis of the IWP data were consistent with the ITT analysis but showed larger effect sizes: a medium effect size difference was found between both active conditions and the WLC on both the enthusiasm and comfort subscales, at T3 a small effect size was maintained between MSG and WLC, 
confirming the finding that there was an increase in enthusiasm and comfort in the active conditions and that this was maintained in the MSG group at T3.At T3 a small to medium effect size was seen on all the subscales between the MSG and WLC.Examination of the means shows an increase in enthusiasm and comfort and a decrease in depression and anxiety in favour of the MSG.A small effect size difference was found on all the subscales at T3 between the two active conditions.The mean scores confirm the ITT findings that participants in the MSG seemed to benefit most from the intervention.Confidence intervals for all outcome effect sizes on the IWP measure crossed zero.At T2 all of the 17 participants in the DG and only 17 of the 20 participants in the MSG group who provided data completed the client satisfaction and system usability questionnaires.Client satisfaction with WorkGuru was high, with 82% in the MSG and 71% in the DG rating the service that they had received as excellent or good.The majority of participants said that they had got the kind of service that they wanted, and that they would recommend the program to a friend.Participants in the MSG were more satisfied with the amount of help that they received and their general satisfaction with the service appeared to be higher.They were more likely to say that the service helped them to deal with their problems and that they would come back to WorkGuru if they needed help again.A small number of participants said that none of their needs had been met, and one participant in the DG said that the service seemed to have made their problems worse.The mean system usability score for the DG was 68.4 and for the MSG 76.0.Participants in both active conditions were given the CEQ at 2 weeks from randomisation.Intervention credibility and the expectancy of participants about improvements were similar across both groups (mean credibility for the MSG = 16.3; mean expectancy for the DG = 12.2, and for the MSG = 14.8).Participants were asked at all three time points if they had taken time off sick for a stress related complaint in the last eight weeks.All groups had seen a fall between T1 and T3 in the number of participants who had been absent from work.For the DG the proportion at T1 was 15%, at T2 18%, and at T3 5%.For the MSG it was T1 25%, at T2 0%, and T3 13%.For the WLC it was T1 29%, at T2 32% and T3 23%.Fig.
2 shows the self-report sickness absence for stress related complaints.Self-reported care as usual was examined to see if there were any differences between the three groups at the three time points.Participants accessed a range of support for their mental health problems, including from GPs, counsellors, online self-help, psychiatrists, psychologists, occupational health nurses and doctors.No differences were found between the groups on the number or type of support accessed, or the number of participants who had been prescribed medication for anxiety or depression.A similar number of participants across the groups reported accessing online support for information.Possible moderators of engagement were explored.The means for participants on goal conflict, time pressure, job autonomy and level of psychological distress at baseline were calculated and the participants placed in groups depending on whether they were above or below that mean.Table 5 shows the mean number of logins for each of the groups and the between group effect sizes.The analysis showed a small effect size for goal conflict, time pressure and level of psychological distress at baseline.Examination of the means suggested that participants who reported lower goal conflicts, lower time pressure and lower psychological distress at baseline had a higher number of logins to the stress management program.No effect size difference was found between the two groups for job autonomy.Confidence intervals for all moderator analysis effect sizes crossed zero.Further exploratory inferential analysis was conducted on per-protocol data.No significant differences were found in t-tests between the active conditions on the number of logins, page views, messages sent by and to the coach and the number of modules completed.The ANOVA showed a significant effect of intervention on levels of stress at T2: F = 3.19, p = 0.049.Contrasts show that stress levels were significantly different for participants in both the DG (2.0, p = 0.050) and the MSG (2.2, p = 0.033) compared to the WLC.This difference was maintained at T3 in the MSG (2.2, p = 0.032).No other significant difference was found on the psychological measures.Two eight-week guided discussion groups were delivered via a bulletin board.The first group had 16 participants and the second group had 10.The second group started five weeks after the first group started.The bulletin board was viewed 493 times by participants and 99 contributions were made: 57 by participants and 42 by the coach.The mean number of contributions made per participant was 2.2.An approximation of the time spent by the coach on each contribution that she made is 15 min; additionally approximately 30 min per week was spent by the coach logging in and monitoring each of the groups.This equates to just over 5 h per group spent by the coach in contributing to the discussion and 4 h per group on monitoring, which is slightly > 1 h of coach time per group per week, or 41.5 min per participant across the eight weeks.Results from the online support group questionnaire, in which items were rated on a scale of 1–10 where 1 means not at all and 10 means very much, indicated that participants were not very satisfied with the groups.Only two items were rated at over 5: these were agreement that participants preferred to use aliases, and the relevancy of the topics chosen by the coach.During the course of this study, across both active conditions combined, the coach sent 185 individual coaching messages through the secure system and received 43 messages
from participants.The content of the messages sent from participants were: acknowledging contact from the coach, reflecting on the content of the modules, sharing assignments asking a technical question, requesting extended access to the site, explaining absence, and questions about the research.Messages sent by the coach at initial log-on, two weeks and six weeks were based on a template, but personalised where possible.All responses to enquiries initiated by participants were personalised.An approximation of time spent by the coach on each message is 5 min, this equates to 15.4 h across the 8-week course spent by the coach on sending messages to participants in both the active conditions.The coach spent 18.7 min per participant sending, reading and responding to messages from the DG, and 17.0 min per participant in the MSG group.In the DG the mean number of messages sent by the coach directly to participants was 3.7, and in the MSG it was 3.4.In the DG the mean number of messages sent by participants to the coach was 1.3, in the MSG it was 0.37.There is a small between group effect size for the number of messages sent by the coach and a medium between group effect size for the number of messages sent by participants suggesting that more messages are sent by both the coach and participants in the discussion group.Participants were asked what if any positive or negatives effects were caused by being in an active condition or being in the control.Across both T2 and T3 participants in the DG identified eight positive effects and 13 negative effects.Across both T2 and T3 participants in the MSG identified 9 positive effects and 7 negative effects.Across both T2 and T3 the WLC identified 3 negative effects.Positive effects included: It made me think/know myself better, and: I liked the support from the coach/community."Negative effects included: I didn't have time to complete it, I found it stressful and: I felt guilty for not using it enough.The negative effects of being in the control were: Disappointment at being in the control and: Not having any contact with the coach.The extent of contamination between the groups was monitored by asking the extent to which participants had discussed the research with colleagues in other groups.At T2 94% of participants said not at all and 6% said a little bit.At T3 87% said not at all and 13% said a little bit.Results of this study support the effectiveness of an online facilitated discussion group in increasing the number of logins to a minimally supported digital stress management program.Medium between group effect sizes were found for both logins and page views, and a small effect size for the number of modules completed.No difference was found in self-reported engagement between the groups.Both the numbers of logins and page views seem to be a more sensitive measure of physical engagement with the program, but metrics such as login and page views may not necessarily measure the extent to which participants are psychologically engaged; clicking through a large number of pages may be a sign of disengagement as participants are not necessarily taking the time to engage psychologically with the content of the page."Self-report measures may be a more useful measure of engagement as they provide the user's assessment of their experience, but it is unlikely that the one-item self-report engagement measure developed for this study is sensitive enough to give a meaningful measure of the individual's experience.Results from this study suggest that the trend 
appears to be that access to the web-based stress management intervention is associated with lower levels of depression, anxiety and stress, and an increase in comfort and enthusiasm compared with the control condition and that these outcomes are largely maintained at follow-up.Participants who accessed the intervention without the discussion group seem to have potentially derived greater benefit.Per-protocol analysis confirms these findings.Further research may usefully explore this possibility by examining the influence of engagement within the individual groups.The effect sizes for the DASS outcomes in this study are in line with those reported in recent meta-analyses on digital stress management interventions and digital mental health interventions delivered in the workplace.Satisfaction with the intervention, and intervention usability was higher in the MSG than the DG.The intervention credibility and the expectancy of participants about improvements were similar across both active conditions, but satisfaction with the discussion groups was low.When recruiting to the study the intention was to run one discussion group of 30 participants.The size of the discussion group was based on previous experience at WorkGuru that suggested that a group of 30 optimised participant engagement.Because of the time that it was taking to recruit to the study, the decision was made to run two groups so that participants would not have to wait for more than three weeks for their group to start.When the group had started, new recruits were still able to join the group over the first two weeks.The smaller size of the groups, the delay in the groups starting, and the experience of participants joining the groups after they had started may have impacted on both the satisfaction with the groups, and the effectiveness of the groups in optimising engagement.Because of these problems with the study design we would suggest that our findings that participants accessing the intervention without a discussion group benefited most from the intervention be interpreted with caution, and that further research is conducted to examine the optimum size and other optimising factors for online facilitated discussion groups delivered alongside digital minimum support interventions.A small effect size difference was found between participants that reported both higher and lower levels of goal conflict, higher and lower levels of time pressure, and higher and lower levels of psychological distress at baseline.Examination of the means suggested that participants who reported lower goal conflicts, lower time pressures and lower distress login to the intervention more frequently.Organisations participating in this research were encouraged to offer participants 1 h a week to complete the program.Employers were not aware of which of their employees were participating in the study so it is unlikely that this message was reinforced to individual participants.Future research could look at whether within an occupational setting, prioritising and setting aside time for individual employees to access digital mental health programs increases the number of times that participants login to the intervention.The explorative inferential analysis confirmed our finding that access to the intervention resulted in a significant reduction in levels of stress at T2 and that this was maintained in the MSG at T3.In recognition that this is a pilot study, we suggest caution in interpreting these findings.For both the active conditions combined the coach 
spent a total of 15.4 h sending messages and responding to messages from participants; an additional nine hours per group was spent by the coach monitoring and contributing to the on-line discussion groups.Combining the amount of coach time spent per participant in facilitating the two discussion groups with the time spent per participant sending, reading and responding to messages, each DG participant required a mean of 60.2 coaching minutes, and each MSG participant required a mean of 17.0 min.Group means and between group effect sizes show that more messages were sent between the coach and participants in the DG compared to the MSG, suggesting that the additional time spent by the coach facilitating the discussion group does not result in fewer individual messages being sent; the discussion group may generate additional individual contact with the coach.Participants were asked what, if any, negative effects were caused by being in the group that they were allocated to.Participants in the DG identified almost twice as many negative effects of being in the group as the MSG.Some participants felt that the demands of the web-based program increased their feelings of stress as they felt guilty for not using the program enough, or felt that they didn't have time to complete it.Being in the group that accessed WorkGuru alongside a discussion group seems to have added to that strain.Further research is needed to gain a greater understanding of the extent to which the workplace is a suitable environment for delivering digital mental health programs.Do the benefits of digital mental health that have been identified in community and health settings translate as benefits in an occupational setting?Or are there additional challenges to delivering these interventions in the workplace that need to be overcome?This pilot study has enabled us to make a more confident but still tentative prediction of effect size for our primary outcome of engagement; we recognise, however, the limitations of using this effect size to determine sample size for a full trial.The pilot supports optimal adherence to the intervention as being ≥ 3 logins, and it supports the number of logins and page views as being a useful measure of exposure to the intervention.Module completion does not appear to be a useful measure; this may be because exposure to anything < 100% of the module would not register as module completion, whereas participants may benefit from the module without having visited every page.A subjective measure of engagement does appear to be useful, but a more comprehensive measure than the one-item measure used for this pilot should be used.The IWP does not seem to be a measure that is sensitive to the between group changes intended by this CBT based stress management program; a future study should explore using an alternative measure of occupational outcome.One of the challenges of running this pilot study was the recruitment of organisations; out of 780 invitations to individuals to nominate their organisation to participate in the study, 19 organisations expressed an interest and six organisations were recruited.One explanation for this low take-up by organisations may be that the individuals on the mailing list were not in the position of authority or influence needed to put forward their organisation for the research.Between them, the six organisations taking part in the study recruited 84 participants; a future study may need to spend more time with organisations supporting them to maximise their recruitment
of participants.Thought also needs to be given to recruiting into the discussion groups in order to minimise the wait for the groups to start and to ensure that a larger number of participants are recruited to each group.Increasing the speed of recruitment may provide a solution.This study had a number of limitations.The first was a limitation of randomising at the level of the individual, which is the potential for contamination between groups: participants in the active conditions discussing the content of the intervention with the WLC.There is no evidence of contamination at T2 but there is some evidence that between group conversations had taken place at T3.A second limitation was the generalisability of our findings: participants recruited to this study were volunteers who had increased levels of stress, and were predominantly well educated females working in social care or the knowledge industry in senior manager or administrator roles, this is not representative of the general workforce.There is a strong need for future research on occupational digital mental health interventions to target industries and occupations that are traditionally under represented in these studies, this includes employees working in blue-collar roles.Only two of the three participating organisations were able to provide demographic data to make a comparison between their workforce, and employees recruited to the study.This information was further limited by a difference between the metrics used by organisations and the metrics used in this study.Future research should work with organisations to collect comparable demographic data so that a better comparison can be made between the workforce and study participants.A third limitation was the recruitment of a targeted population: participants with elevated levels of stress.Targeting these interventions towards individuals who are perceived to be experiencing stress may add to the stigma of mental health programs impacting on reach and up-take.Future studies may wish to evaluate similar programs with universal populations.Fourthly, some of the measures used in this study were developed or adapted for the study, and were found to have relatively low reliability, which may impact on the strength of our findings.Fifthly, a failure in randomisation in the occupational groups could have affected the outcomes; we would expect a larger study to correct that.Sixthly, the measures of engagement used in this study were confined to measures of exposure future studies of occupational digital mental health interventions may wish to utilise more comprehensive measures of program engagement.Finally, we recognise the limitations of generalising conclusions from this pilot study and would suggest caution in interpreting our findings.The findings of this study suggest that access to an online facilitated discussion group increases engagement with a minimally support occupational digital mental health intervention but that this increase does not necessarily result in improved psychological outcomes or increased satisfaction when compared to access to the CBT based stress management intervention on its own.Access to the stress management program resulted in lower levels of depression, anxiety and stress and an increase in comfort and enthusiasm post intervention that were largely maintained at follow-up.SC is the founder of WorkGuru, and continues to have a commercial interest in the company.SC is the principal investigator for the study.SC and KC conceptualised the initial trial 
design. This was developed with the help of PRH and KG. SC drafted the manuscript and conducted the data analysis. KC, PRH and KG provided feedback and contributed to the final version of the manuscript. All authors have read and approved the manuscript.
Introduction Rates of work-related stress, depression and anxiety are high, resulting in reduced work performance and absenteeism. There is evidence that digital mental health interventions delivered in the workplace are an effective way of treating these conditions, but intervention engagement and adherence remain a challenge. Providing guidance can lead to greater engagement and adherence; an online facilitated discussion group may be one way of providing that guidance in a time-efficient way. This study compares engagement with a minimally guided digital mental health program (WorkGuru) delivered in the workplace with a discussion group (DG) and without a discussion group (MSG), and with a wait-list control (WLC); it was conducted as a pilot phase of a definitive trial. Methods Eighty-four individuals with elevated levels of stress from six organisations were recruited to the study and randomised to one of two active conditions (DG or MSG) or a WLC. The program WorkGuru is a CBT-based, eight-week stress management intervention that is delivered with minimal guidance from a coach. Data were collected at baseline, post-intervention and at 16-week follow-up via online questionnaires. The primary outcome measure was the number of logins. Secondary measures included further engagement measures, and measures of depression, anxiety, stress, comfort and enthusiasm. Quality measures including satisfaction and system usability were also collected. Results A greater number of logins was observed for the DG compared with the MSG; this was a medium between-group effect size (d = 0.51; 95% CI: −0.04, 1.05). Small to medium effect size differences were found at T2 in favour of the active conditions compared with the control on the DASS subscales depression, anxiety and stress, and the IWP subscales enthusiasm and comfort. This was largely maintained at T3. Satisfaction with the intervention was high, with individuals in the MSG reporting greater satisfaction than individuals in the DG. Conclusions This study shows that access to an online facilitated discussion group increases engagement with a minimally supported occupational digital mental health intervention (as defined by the number of logins), but that this does not necessarily result in improved psychological outcomes or increased satisfaction when compared with access to the intervention without the group. Access to the web-based program was associated with lower levels of depression, anxiety and stress and an increase in comfort and enthusiasm post-intervention; these changes were largely maintained at follow-up. Trial registration This trial was registered with ClinicalTrials.gov on 18 March 2016 as NCT02729987 (https://clinicaltrials.gov/ct2/show/NCT02729987?term=NCT02729987&rank=1).
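For readers who want to see how the headline between-group effect size and its confidence interval are typically derived from pilot data of this kind, the sketch below computes Cohen's d with a pooled standard deviation and an approximate large-sample 95% CI. The group sizes, means and standard deviations are hypothetical placeholders chosen only for illustration; they are not values from the study, and the CI formula is the common normal approximation rather than whatever method the authors used.

```python
# Minimal sketch: Cohen's d for login counts with an approximate 95% CI.
# All numbers below are hypothetical placeholders, NOT the study's data.
import math

def cohens_d_with_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Pooled-SD Cohen's d and a large-sample 95% confidence interval."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Standard large-sample approximation of the sampling variance of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical login counts: discussion group (DG) vs. minimal support group (MSG)
d, ci = cohens_d_with_ci(m1=9.0, s1=6.0, n1=28,   # DG: assumed mean logins, SD, n
                         m2=6.0, s2=5.5, n2=28)   # MSG: assumed mean logins, SD, n
print(f"Cohen's d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

With placeholder groups of roughly this size the interval straddles zero, which illustrates why a pilot of this scale produces a wide CI around its effect estimate and why the authors caution against relying on the pilot effect size alone to determine the sample size for a definitive trial.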
Environmental considerations for impact and preservation reference zones for deep-sea polymetallic nodule mining
Deep-sea mining activities are being proposed in national and international waters, focusing on three main resource types: polymetallic nodules, seafloor massive sulphides and cobalt-rich crusts.Mining interest for nodules is mostly centred in the Clarion Clipperton Zone of the northern equatorial Pacific, SMS on active plate boundaries, and crust mining on seamounts, principally in the northwest Pacific.Whilst these three types of minerals will each require bespoke technology and approaches, they share in common the potential to cause serious harm to the marine environment .In the case of nodule mining, the footprint will be large, on the scale of hundreds of square kilometres of seafloor each year .The spatial footprint of SMS and crust mining will be smaller but still ecologically significant .Seabed mining activities for different mineral resources at shallower depths will not be covered here, but it is worth noting that some large operations exist.Within national jurisdiction, some deep-sea SMS mining operations have been approved to date including in Papua New Guinea and in Japan , though they have not yet gone into commercial production.No deep-seabed mining in the legal “Area” beyond national jurisdiction has yet been approved, and the environmental regulations and approval process for commercial DSM exploitation are still under development by the International Seabed Authority.The detailed requirements for environmental monitoring of commercial DSM are likewise still in development.The mining of deep-ocean minerals, like any form of human development, will impact the surrounding environment and biological communities.The mining vehicle is likely to disturb the sediment in wide tracks , which will likely remove most organisms.Noise and light pollution from the mining machinery and support vessels will impact biological communities from the sea surface to the deep-ocean floor .Sediment plumes created by the seabed mining operation will spread in the water column and eventually settle on the seafloor, smothering any fauna in both the directly disturbed area and surroundings .Sediment plumes may also arise from the surface de-watering operation and will likely be discharged at depth .Models suggest that large sediment plumes will be created that spread over extensive areas, particularly in the case of nodule mining on fine-grained abyssal sediments .It is estimated that the sediment plume will cover at least twice the area of the operation .As an input to the ongoing development of ISA environmental rules and regulations, this paper outlines key considerations relevant to the design and selection of sites to monitor impacts of DSM.Though good design principles remain relevant regardless of the effect being measured, not all possible effects are considered here.This paper takes into account existing ISA guidance, where available, as well as some of the issues raised during workshops in 2015, 2016 and 2017.For the purposes of this paper, environmental management of DSM shall be understood to be a mechanism to minimise direct and indirect damage of mining-related activities to the marine organisms, habitat, and ecology of the region.To achieve these ends, it is necessary to avoid / minimise the negative impacts where possible, which in turn requires a level of monitoring such that impacts are readily detectable and assessable, before they cause serious harm.For those places where impacts have occurred, physical, biological and ecological recovery will also need to be 
monitored.Establishing an effective monitoring regime requires understanding the distribution of the parameters of interest in a region before mining commences, and hence detailed baseline surveys of the mining areas are first needed, before the monitoring and mitigation plans can be developed.The underlying concepts for spatial management zones are similar for all types of mining.However, there are differences in considerations concerning the scale, spatial constraints, and ecology of these areas.The biological communities associated with active SMS deposits, for example, are very different from those in nodule fields, with the former being isolated areas with relatively high densities of fauna but relatively low diversities , whilst the latter are the opposite .Crusts and inactive SMS deposits are associated with typically diverse communities particularly of sessile suspension feeders , and unlike the other DSM resources, crusts may also be associated with commercial fish species .As a result of these and other critical differences, design of the monitoring regimes for each of the DSM resource types will necessarily differ in many aspects.We focus here on polymetallic nodule deposits.However, many of the underlying design criteria which shape decisions on monitoring, as discussed below, will be similar across all deposits.The legal and regulatory requirements for environmental monitoring of deep-sea mining will likely be the most important factor controlling what is done.The ISA provides some information on spatial management at two scales: at the scale of individual mining claims and at a regional scale.A regional environmental management plan has been developed for the CCZ , which sets out a range of representative areas for the region to be protected from mining activities.The APEIs are important to regional-scale management but are not necessarily part of the claim-scale monitoring scheme and so are not covered in detail here.The ISA does provide guidance on claim-scale spatial management for all types of mining in the current “mining code” , which provide an important approach for addressing several key monitoring objectives.In this context, the term “mining code” refers to the collection of rules, regulations and guidance concerning DSM.The mining code currently applies only to exploration activities, and sets out two types of spatial environmental management zones within the mining claim area for assessing mining activities: impact reference zones and preservation reference zones.These are defined as follows:"IRZ are areas to be used for assessing the effect of each contractor's activities in the Area on the marine environment and which are representative of the environmental characteristics of the Area.PRZ means areas in which no mining shall occur to ensure representative and stable biota of the seabed in order to assess any changes in the flora and fauna of the marine environment.The draft exploitation code and environmental regulations do not yet provide guidance for the implementation of PRZ or IRZ.The environmental management plan for the CCZ provides some additional information:Contractors will provide in their environmental management plans the designation of the required impact and preservation reference zones for the primary purposes of ensuring preservation and facilitating monitoring of biological communities impacted by mining activities.Impact reference zones should be designated to be within the seabed claim area actually mined.Preservation reference zones should be 
designated to include some occurrence of polymetallic nodules in order to be as ecologically similar as possible to the impact zone, and to be removed from potential mining impacts;,Contractors are required to minimise potential impacts on established preservation zones, and the Authority should consider the potential for impact on established preservation zones in evaluating any application for a mining license.Fig. 1 provides a plausible graphical representation of these zones within a nodule mining claim.The PRZ is principally a ‘control’ site for the IRZs, which measure impacts.However, being located closer to the claim area than the APEIs, the PRZs could also play important roles for conservation, for example providing connectivity as ‘stepping stones’ and sources for recolonization for impacted sites.However, to fulfil a conservation objective the PRZ would need to be in place for the long term and not mined.In both conservation and monitoring roles, PRZs will need to be representative of mined habitats and protected from the primary and secondary effects of mining activities.However, their contribution towards meeting conservation objectives, as part of a potential representative network of protected areas, is not the focus of this paper, which looks at monitoring.There are many practical problems in the detection of anthropogenic impacts on biological communities , particularly in the deep sea.Furthermore, deep-sea environments associated with DSM differ from shallower habitats in several important ways, which affect both statistical confidence and power, and will vary by resource type.Many communities have large natural temporal variances in the populations of many species .These populations often show a marked lack of concordance in their temporal trajectories from one species and one place to another .This problem is further exacerbated in many deep-sea areas by low densities of fauna and high diversities , although this may not be the case in active venting systems or seamounts .Sampling must therefore be of sufficient size and replication to identify unusual patterns of change in suites of interacting and variable measurements often spanning considerable distances .Furthermore, the first monitoring samples should be completed prior to mining starting to provide an appropriate baseline.When these factors are taken into account and applied to mining a range of considerations become apparent, which are explored below and summarised in Fig. 
2.Size is a fundamental characteristic of spatial management zones.Conservation considerations aside, zones need to be sufficiently large to contain a representative subset of organisms sufficient for a statistically robust assessment of ecosystem integrity.Robust assessment of biological assemblages requires enumeration of hundreds of individuals from as many species as possible .Additionally, sites will need to be large enough to allow regular and repeated destructive sampling over a long period, likely decades, without any impact from sampling being detected.Densities of some organisms, particularly larger-sized animals, are very low, especially in nodule areas.In the case of megafauna in nodule fields especially, representative sampling may require assessment of transects kilometres in length .Depending on the effect size being measured, the variances of the indicator under investigation and the statistical power desired, anywhere from 25 to > 100 replicates may be necessary .These will need to be contained within the zone.In line with the precautionary approach it will be necessary to design zones based on precautionary assumptions.While default minimum sizes of protected areas are typically specified by the regulator, other more flexible science-based approaches for determining appropriate size could be taken, assuming the capacity exists to assemble the relevant local data and to assess local populations and their reproductive potential.While science-based local assessments increase the likelihood of effectiveness , they do come with greater research costs.Thus, a management system could begin with a precautionary size of a PRZ as a default position, which could be modified as more data become available suggesting what minimum dimensions might be required to achieve the objectives of viability, representability and resilience to sampling impact.Finally, given the expected long duration of the monitoring, PRZs will need to be large enough to self-support populations of the species being monitored.Otherwise, reductions in the populations of species in the PRZ because of insufficient recruitment could be confused with natural declines in the region that were caused by other factors, such as climate change.Spatial management zones will need to have sufficient geographical distance from mining to ensure that the PRZs are not impacted by mining activities, and the IRZs are affected by a meaningful range of affects.However, environmental heterogeneity tends to increase with spatial scale , so zones closer together are likely to be more representative of each other.Thus, both types of zones will need to be close enough to each other and to the mining activities to ensure that they represent reasonable examples of impact and control treatments.Given the currently unknown behaviour of mining plumes, the question of appropriate spacing is particularly difficult, and will therefore require taking an adaptive approach for each of the resource types, to measure the varying impacts of the plumes over distance, and to control for them.It may be necessary to place IRZ at multiple distances away from the mining impact to evaluate the gradient of disturbance and its impacts.The monitoring design and schedule should be able to reliably detect the impacts of ongoing mining activities by comparing the state of the ecosystem subjected to mining with the state of the ecosystem that would have existed if mining had never occurred.This requires an approach that can reliably estimate the effect of mining 
activities amid the diverse sources of spatial and temporal variability in the deep sea, which in turn requires data to be collected before the mining has occurred , during, and at multiple points after the mining .Spatiotemporal variation is addressed, in part, by repeated sampling through time and at multiple sites .Many impacts will lead to step-changes in the ecosystem after mining, which are easier to detect .However, some impacts from mining, particularly secondary impacts, may not cause immediate or constant changes to a system.Indeed, complex ecological interactions may take time to propagate through the system, leading to time-dependent effects of disturbance .Monitoring needs to be able to detect these changes.It should also be able to detect combined or cumulative effects of other environmental changes and attribute these to a cause or causes.Regular monitoring is also important to provide the information necessary for responsive adaptive management .A statistically robust sampling programme should be implemented to consider the points raised here .The robustness of the plan should be tested and scrutinised prior to sampling by statistical experts familiar with working in the deep sea.Baseline data collected at the claim sites should be sufficient to allow for estimations of population means and variance in the indicator of interest required for a formal power analysis of a sampling plan.Three variables are of relevance here: significance, power, and effect size.Typically, a significance level of 5% and a power of 80% are selected, but arbitrary convention may not be appropriate for some questions.Measuring smaller effect sizes will require more replicates than larger ones, and hence choosing an appropriate value beforehand is necessary.There are few conventions concerning effect levels to be measured, and will indeed be heavily dependent on the particular indicator.However, effect levels greater than one standard deviation are likely to already fall within the legal realm of ‘serious harm’ to the environment.Thus, it is expected that measuring an effect level less than one standard deviation will be necessary, where an effect size of 0.5 is that the mean value of the indicator in the PRZ is 0.5 standard deviations smaller than that in the IRZ.Given the multiple considerations and complexities involved, a system of peer-review or independent verification of sampling designs would help ensure that the design was robust prior to commencement of an expensive sampling programme.Marine data acquisition in other industries, such as oil and gas, has generally become formulaic and focused on the known impacts and effects of those industries, in part to meet existing legal and commercial drivers .In moving into the deep sea environment this approach has revealed problems in terms of robustness of data for understanding impacts to deep-sea ecosystems .As a result, it will be insufficient to solely rely on shallower water protocols; rather, these will need to be revised for the deep-sea to provide the necessary statistical power for measuring the relevant deep-sea ecological indicators.Monitoring multiple examples of each type of zone enhances the statistical power to detect effects, and thus in the deep-sea where statistical power is usually an issue, it should be assumed that multiple replicates of PRZs and IRZs will need to be established.The comparison between a single impact and a single control location is confounded by any non-mining-related temporal ecological variation.For 
example, populations often have different temporal trajectories in different locations, and temporal interaction among places is also common .Multiple control sites are also necessary to detect disturbances that do not affect long-term mean abundances of a population, but, instead, alter the temporal pattern of variance of abundance .To be effective, the location of zones should be defined as soon as possible in the mining process, at least in the preparation of the environmental impact assessment , but preferably in the initial planning stages .However, at the start of a mining project there will be some uncertainty in the exact spatial and temporal pattern of mining activities.This may lead to inappropriate zones being defined, for example if IRZs are unsuited to mining, or mine plans change around designated zones.Furthermore, it is likely that some areas defined as PRZs or IRZs may turn out, after further monitoring, to be unrepresentative.Also, some PRZs may turn out to be too close to mining activities and will become impacted.These could be re-designated as IRZs; others may have to be retired.These changes in status of designated areas could be particularly significant, both scientifically and economically, if unreplicated PRZs are impacted, as that this could require operational modifications or reductions in the planned mining area or movement of mining into less valuable resource areas.Finally, natural small-scale episodic events may occur in some areas reducing their value for monitoring, particularly in the case of highly dynamic SMS vent ecosystems."For all these reasons, increasing redundancy through designation of multiple sites is strongly suggested in order to mitigate a range of potential problems and allow for flexibility and adaptability in both the contractor's mining activities and the monitoring plan.Recent research illustrates the importance of nodules for abyssal biodiversity.Likewise, the minable resource may also be important for biodiversity associated with SMS and crust deposits .Ecological communities likely also respond to finer-scale environmental variation, such as variation in local geomorphology .Consequently, to fulfil the obligation of representativeness of the PRZ it will be necessary to demonstrate that the PRZs contain similar ecological and geomorphic features as the planned mining area, which in the case of nodule mining will mean a similar density and size of nodules.Thus, “including some occurrence of polymetallic nodules” is likely an insufficient criterion for a PRZ, which is “to be as ecologically similar as possible to the impact zone” .Consideration should also be given to PRZs having other environmental traits that are the same as those sites suitable for mining, as these traits may also affect ecological community structure.For example, having limited slope and rugosity in the case of nodule mining, particularly as variation in seabed structure is known to affect communities in abyssal plains .Once suitable areas have been identified, the IRZs and PRZs should be selected at random within those areas for each habitat type .Outside of the mined habitats, it is likely that other habitats, including ecologically or biologically significant areas may exist within the claim zones and that these may be impacted from mining activities through plume and other effects.These habitats could, for example, include areas unsuitable for mining because of geomorphological features or lack of resource, and areas with significant aggregations of 
habitat-forming organisms.To understand the full impacts of mining, it will be necessary to identify these ecologically important areas prior to mining and also include them in any monitoring programme.In addition, as discussed above, it will be necessary for statistical purposes to ensure that representative portions of all local habitats are spared from mining impacts.These may require an additional sub-class of PRZs to be recognised for each special feature and habitat type.Sediment and chemical plumes from mining disturbance are likely to be an important impact from DSM with potentially far-reaching and damaging impacts with expected negative consequences to both benthic and pelagic deep-water communities.The geographic location of plume impacts will almost certainly include the mined area, but may extend to be several times the spatial extent of the mining activities themselves .Sediment plumes may also fall onto future mining blocks, where they will later become re-suspended and re-distributed further, thus amplifying their impacts.The impact of plumes from DSM activities is poorly known, despite several experiments designed to simulate them .Environmental management of current and future activities will require that impacts from plumes are measured and understood.Thus, some impact reference zones will need to be designed specifically for the effects of plumes.Here, to differentiate these from other IRZs, we add a ‘P’ to the designation IRZ-P.IRZ-Ps would be situated in an area that is representative of the mined area that is not mined but is expected to receive significant impacts from the sediment plume.A gradient of plume-related impacts will need to be evaluated.It will be necessary, but could prove to be difficult, for the contractor to provide evidence that the designated PRZs are not affected by the impacts of plumes and sediment deposition, bearing in mind that sub-lethal long-term effects of low levels of increased sedimentation are currently unknown.If a PRZ is found to be impacted, it will no longer be able to fulfil its role as defined in the mining code of being able to detect changes in the mined area.Initial modelling results suggest that plume impacts will extend over areas several times larger than the mined area, particularly if the locations of low levels of additional suspended materials are assessed.Whilst the direct impacts of mining are difficult to mitigate, the secondary impacts caused by plumes, noise, etc., are mitigatable and should be the focus of management measures to minimise such disturbance.The spatial distribution of plume impacts is a function of four components: i) engineering: the type of mining machinery in use – how deeply it digs, how finely it grinds up the raw ore, how it moves along the seabed, and how it discharges its “exhaust” of unwanted sediments; ii) geology: the quantity and nature of the ore, as well as the associated seabed sediments, how long they stay in suspension and the amount of dissolution of elements; iii) hydrography: the strength, direction, and variability of local eddies and currents at the time of mining ; and, iv) the duration of the mining activities.To effectively predict the spatial extent of the plume and hence set effective spatial management zones, models that use realistic data for all four components will be necessary.Likewise, mitigation measures to keep the extent of plumes to minimum could focus on these four components, exploring engineering solutions in concert with geological and hydrological site 
selection criteria, where they are least likely to have lasting impacts.Research is required to determine the levels of suspended sediment or chemical concentrations that are not acceptable in a PRZ.This will likely be set at a threshold where either sediment cannot be detected or where it has been shown to have negligible effects on deep-water organisms.This threshold could also be used for monitoring and enforcement, but may take several years of careful monitoring to establish.In the meantime, a precautionary value will need to be selected.It is very possible that some of the impacts of mining will extend beyond the boundaries of the contractual / license area, particularly the impacts of plumes.This would require monitoring outside of what a contractor may be willing, obliged or allowed to provide.Alternatively, it could mean that the contractor could not mine up to the boundary of their area.These concerns are particularly relevant to mining activities generating large plumes, those at the edge of claim zones, and in small sized or irregularly shaped claims with a greater edge-to-area ratio and hence increased chances of edge-related effects spilling over into neighbouring areas.The likelihood of these concerns are increased if contractors give up parts of their exploration area when they move to exploitation.If it should happen that these impacts extended to a neighbouring contractual / license area held by another State Party, they may present diplomatic as well as liability challenges.Likewise, if plumes were to fall onto unclaimed areas there could be questions both concerning the liability and also who would pay to monitor these areas.Finally, plumes that fell within national jurisdiction would likely trigger environmental liability based on existing international environmental jurisprudence , which again would need to be expanded to take into account the unique legal specifics of DSM.In all cases, to determine the legal ramifications, having a robust monitoring programme in place will be necessary to: i) detect such trans-boundary effects as they occur, and ii) to determine if these effects are likely to cause serious harm to the environment.The first point suggests that monitoring outside of claim blocks will be necessary when spillover is likely.The second point suggests that such monitoring would have to be factored in before mining commences; i.e. 
the appropriateness of trans-boundary monitoring should be a consideration in the monitoring plan from the outset.In an increasingly crowded ocean, the zoning of PRZs and IRZs will ultimately require integration into wider spatial planning and management.Maps and coordinates of zones should be made public.Additionally, they could be communicated to secretariats of other relevant international maritime bodies to better ensure they are taken into account.However, in cases where there are other potentially conflicting activities, it is unlikely that notification alone will be sufficient, and cooperative mechanisms will need to be developed .Additionally, the PRZs should be included in international databases of protected areas and take into consideration other international designations.It is conceivable that contractors may want to share PRZs along a common boundary of their claim areas.This offers the possibilities of cost and effort savings as well as a way to carry out more intensive monitoring.Combining the financial resources for monitoring by two contractors may allow for the installation of ambitious and novel monitoring equipment, such as seafloor observatories .Seafloor observatories could provide real-time data to enhance day-to-day environmental management, for example detecting peak current events, during which mining could be avoided because plumes generated would be widely dispersed.Combining PRZs also has the advantage of ensuring monitoring approaches are the same between two contractors, although coordination of monitoring activities around two independent mining developments may be difficult.A trans-boundary PRZ should be part of a wider array.Monitoring just one PRZ for two contract areas is not being suggested here, and would have several disadvantages: 1) it reduces replication, which would leave monitoring more vulnerable to the many possibilities of technical and ecological uncertainty; and, 2) it also reduces the overall spatial sampling carried out in the mining areas, with subsequent reductions in the amount of information available for the regulator for regional planning and understanding.Therefore, whilst cost-saving and cooperation among contractors should be encouraged, it should not be employed to replace rigorous sampling and replicates.For mining using new and developing techniques, it should be advantageous for independent observers or verification agencies to be used to help ensure the independence and robustness of results.Transparency, particularly in the nascent stages of this new industry, will be important in developing shared good practices and building trust .Sampling plans would be made publicly available for external scrutiny, in addition to peer review, prior to sampling.Making subsequent data and metadata, analysis and interpretations publicly available will also help improve accountability and credibility of the results from this new and emerging industry.When setting up a monitoring scheme for a given mining block within a contractual / lease area, three steps might be considered: 1) beginning with more PRZs than will ultimately be used in long-term monitoring to ensure statistical robustness as well as redundancy given various uncertainties; 2) re-designating some PRZs that are affected by mining into IRZs; and 3) learning from the current situation and the future plans for mining to set up a new array of PRZs/IRZs an appropriate time in advance in order to acquire the necessary baseline information.In this scheme, there would be three 
activities operating in parallel within a contractual area: i) active mining and monitoring; ii) baseline monitoring at the next block in anticipation of mining; and iii) surveying / selecting the subsequent mining block after the one currently being monitored for baseline information.Flexible iterative management, as suggested here, allows for learning and adapting through experience, and could prevent delays resulting from inadequate or unsuitable baseline or monitoring data, whilst providing the Contractor a stepwise investment strategy, rather than having to put in place a full monitoring system from the outset.However, such flexibility is only possible if the contractual / licensing scheme allows for regular review and revision of Plans of Work.The ISA contractual system currently in place for exploration has very limited flexibility of this sort, and Plans of Work for exploration have seldom been modified over the course of their 20-year life spans.The latest draft of the exploitation regulations proposes separate Environmental Regulations, which are not yet completed.The ISA states that guidelines are needed for establishment of IRZ and PRZ, which will feed into the Environmental Regulations.Establishing scientifically realistic and effective guidelines for spatial management zones should in turn inform the development of effective rules and regulations.Using existing experimental design guidance as a starting point, this paper has added considerations particularly relevant to the deep-sea and DSM, in order to formulate recommendations for establishment of PRZs and IRZs.Although focused on mining activities in areas beyond national jurisdiction, the recommendations presented here would be applicable and useful to the design of spatial environmental management zones in national waters.PRZ and IRZ in crust, SMS and pelagic environments present additional challenges to those presented here for nodule systems.Additional critical thinking in collaboration with a wide variety of experts is necessary for appropriate mechanisms for establishment to be developed.
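The replicate-number and power considerations raised earlier (a conventional 5% significance level, 80% power, and effect sizes below one standard deviation) can be made concrete with a simple power calculation for a two-sample comparison of a single indicator between a PRZ and an IRZ. This is a minimal sketch under standard t-test assumptions, which will often be optimistic for patchy, over-dispersed deep-sea count data; the effect sizes iterated over are purely illustrative.

```python
# Minimal sketch: replicates per zone for a two-sample comparison at
# alpha = 0.05 and power = 0.80, across a range of standardized effect sizes.
# Assumes approximately normal indicator values; real abyssal data are often
# over-dispersed, so these figures should be read as lower bounds.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for effect_size in (1.0, 0.5, 0.25):  # difference in units of standard deviations
    n_per_zone = solver.solve_power(effect_size=effect_size,
                                    alpha=0.05,
                                    power=0.80,
                                    alternative="two-sided")
    print(f"effect size {effect_size:>4}: ~{n_per_zone:.0f} replicates per zone")
```

Even under these idealised assumptions, halving the detectable effect size roughly quadruples the replication required, which is one reason why precautionary zone sizes need to accommodate repeated, potentially destructive sampling at this scale over decades.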
Development of guidance for environmental management of the deep-sea mining industry is important as contractors plan to move from exploration to exploitation activities. Two priorities for environmental management are monitoring and mitigating the impacts and effects of activities. International regulation of deep-sea mining activities stipulates the creation of two types of zones for local monitoring within a claim, impact reference zones (IRZ) and preservation reference zones (PRZ). The approach used for allocating and assessing these zones will affect what impacts can be measured, and hence taken into account and managed. This paper recommends key considerations for establishing these reference zones for polymetallic nodule mining. We recommend that zones should be suitably large (Recommendation 1) and have sufficient separation (R2) to allow for repeat monitoring of representative impacted and control sites. Zones should be objectively defined following best-practice and statistically robust approaches (R3). This will include the designation of multiple PRZ and IRZ (R4) for each claim. PRZs should be representative of the mined area, and thus should contain high -quality resource (R5) but PRZs in other habitats could also be valuable (R6). Sediment plumes will influence design of PRZ and may need additional IRZ to monitor their effects (R7), which may extend beyond the boundaries of a claim (R8). The impacts of other expected changes should be taken into account (R9). Sharing PRZ design, placement, and monitoring could be considered amongst adjacent claims (R10). Monitoring should be independently verified to enhance public trust and stakeholder support (R11).
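As a rough illustration of why the hydrographic and engineering components discussed above dominate the plume footprint, the back-of-envelope calculation below combines an assumed discharge height, particle settling velocity and near-bottom current speed into a first-pass horizontal transport distance. This is not a model used in the paper: it ignores turbulent diffusion, flocculation and resuspension, all of which matter in practice, and every parameter value is an illustrative assumption.

```python
# Back-of-envelope plume transport: horizontal distance travelled before
# particles settle back to the seabed, ignoring diffusion, flocculation and
# resuspension. All parameter values are illustrative assumptions.

def settling_footprint_km(release_height_m, settling_velocity_m_s, current_speed_m_s):
    """Distance (km) advected during the time taken to settle to the seabed."""
    settling_time_s = release_height_m / settling_velocity_m_s
    return current_speed_m_s * settling_time_s / 1000.0

release_height_m = 5.0  # assumed height of the collector discharge above the seabed
for w_s in (1e-3, 1e-4):          # settling velocity (m/s): coarser vs. finer fraction
    for u in (0.02, 0.05, 0.10):  # near-bottom current speed (m/s)
        d_km = settling_footprint_km(release_height_m, w_s, u)
        print(f"w_s={w_s:.0e} m/s, current={u:.2f} m/s -> ~{d_km:.1f} km downstream")
```

Even this crude estimate puts the finer sediment fraction kilometres downstream before deposition, consistent with the expectation above that plume impacts extend well beyond the mined blocks and hence beyond what a single claim-internal IRZ can capture.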
BCL9L Dysfunction Impairs Caspase-2 Expression Permitting Aneuploidy Tolerance in Colorectal Cancer
This comprehensive genomic analysis of aneuploid colorectal cancer identified frequent mutations and deletions of BCL9L leading to caspase-2 dysfunction and the tolerance of chromosome missegregation, which operates independently of TP53 status.These data support the existence of parallel pathways complementing TP53 dysfunction in the tolerance of aneuploidy and the central role for caspase-2 in the stabilization of p53 following chromosome missegregation events.Emerging evidence supports the influence of intratumor heterogeneity on patient outcome and drug response.Genomic instability is frequently observed in cancer, driving intercellular variation and subsequent intratumor heterogeneity, providing the substrate for selection and tumor evolution.Chromosomal instability is a form of genome instability characterized by the ongoing disorder of chromosome number and/or structure.Numerical CIN occurs after whole chromosome missegregation due to mitotic defects and results in an aberrant chromosome number, known as aneuploidy.Structural CIN results in the disordered integrity of parts of chromosomes.Both types of CIN are interconnected: missegregated chromosomes are exposed to mitotic stress that generates structural CIN while changes in chromosome structure render them susceptible to missegregation.Since chromosome segregation errors are poorly tolerated by diploid cells, survival mechanisms, termed aneuploidy tolerance, are crucial for the propagation of aneuploidy in tumors.Mutations in TP53 and buffering of protein changes due to aneuploidy have been proposed as candidate mechanisms of aneuploidy tolerance.Due to the potential clinical benefit of limiting CIN in tumors, further efforts to elucidate these survival mechanisms might contribute to limiting this driver of heterogeneity.Colorectal cancer can be broadly divided into microsatellite-instability high and microsatellite-stable tumors.MSI CRC tumors remain near diploid, whereas MSS tumors develop a wide range of aneuploid karyotypes and CIN.TP53 mutations occur frequently in aneuploid tumors; however, next-generation sequencing efforts have not specifically explored the somatic mutational landscapes of aneuploid versus diploid MSS CRC tumors to identify determinants of CIN.In this study, we aimed to identify somatic mutations enriched in aneuploid CRC and to elucidate the potential role of these mutations in the development of CIN in CRC.We selected a cohort of 17 MSS colorectal adenocarcinomas and eight MSS aneuploid cell lines for whole-exome sequencing.Ploidy status was deduced by calculation of the DNA index using DNA image cytometry data obtained from nuclei isolated from paraffin-embedded specimens.DI was calculated as the ratio between the mode of the relative DNA content of observable tumor nuclei peaks and a diploid control consisting of nuclei from infiltrated fibroblasts, endothelial cells, and immune cells.To validate the DNA image cytometry results, we performed centromeric fluorescence in situ hybridization for chromosomes 2 and 15, since these chromosomes are not frequently subject to whole chromosome gains or losses in CRC.The overall distribution of centromeric signals in tumor cells was compared with the normal adjacent tissue.By convention, a tumor was classified as aneuploid when an aneuploid peak was detected by DNA image cytometry or when significant changes in the distribution of centromeric signals were detected for one of the chromosomes tested.A tumor was classified as diploid when no aneuploid populations 
were detected by DNA image cytometry and no significant changes were detected in the distribution of centromeric signals for the two chromosomes tested.We detected modal chromosome signals different from 2 for at least one chromosome in samples with aneuploid peaks.For the MSS cell lines, ploidy status was obtained from published karyotyping and SNP array analysis.Aneuploid CRC is strongly associated with CIN defined by cell-to-cell variation of centromeric signals and surrogate parameters that measure karyotypic complexity in cancer genomes such as the weighted genome instability index, which assesses the fraction of the genome with alterations.We observed that both modal centromere deviation for chromosomes 2 and 15 and the wGII were significantly higher in aneuploid tumors.In samples with DI = 1, the modal centromere signal was 2 in all cases except tumor 395, which showed significant alterations in the overall distribution of chromosome 15 signals and was therefore classified as aneuploid.This classification was also supported by a high wGII.No significant differences were observed in tumor sample purity between aneuploid and diploid tumors.Taken together, ten MSS tumors were classified as aneuploid and seven as diploid.To attempt to identify aneuploidy-specific mutations, we performed exome sequencing on DNA from tumors, normal adjacent tissue, and cell lines, and mutation calling of tumor somatic variants was performed by filtering germline variants identified in normal adjacent colon.Manual curation of variant calls and validation by Sanger sequencing revealed a list of 32 genes specifically mutated in aneuploid samples.Notably, known CRC drivers did not segregate according to tumor ploidy status."As expected, however, TP53 mutations were significantly enriched in aneuploid tumors.No somatic mutations were mutually exclusive with TP53 in this discovery cohort.BCL9L was the only gene for which all mutations found were clearly inactivating, with one nonsense mutation Q713∗ in tumor 379, one nonsense mutation R716∗ in the cell line SW1463, and one splice-site variant in the tumor sample 363.We also found loss of heterozygosity at the BCL9L locus in two aneuploid tumors and one cell line.R716∗ was observed in two of four alleles of the aneuploid cell line SW1463.Q713∗ and the splice-site mutation were observed in two of three alleles of tumor samples 379 and 363, respectively, which suggests that BCL9L mutations occurred early, prior to chromosome duplication.Next, we performed two functional RNAi screens probing phenotypes relevant to chromosome segregation errors and their tolerance.We used the diploid cell line HCT-116 due to its low level of constitutive chromosome segregation errors relative to CIN CRC cell lines and its poor tolerance of drug-induced segregation errors.In the first RNAi screen we silenced each of the 32 genes mutated in aneuploid samples and examined the consequence upon chromosome segregation error frequency as previously described, with no significant results.Second, we performed a screen to detect tolerance of chromosome segregation errors.Chromosome missegregation induces p53-mediated cell cycle arrest in the next G1 phase, often followed by apoptosis, thereby preventing propagation of aneuploid progeny.Chromosome missegregation can be artificially induced in HCT-116 cells with reversine, an Mps1 inhibitor that impairs the spindle assembly checkpoint resulting in chromosomal non-disjunction and missegregation.Consistent with results by Jemaa et al., we found 
that reversine treatment induces subsequent arrest or cell death.We depleted all aneuploid-specific genes individually in HCT-116 cells with small interfering RNA pools from Dharmacon in the presence or absence of 250 nM reversine, a concentration that does not inhibit Aurora kinase B.Silencing of TP53, RABGAP1, BCL9L, HDLBP, and ZFHX3 induced reversine tolerance.We considered a gene validated when at least three of four individual oligonucleotides showed the same effect as the pool.TP53, BCL9L, and ZFHX3 were validated following these criteria.Experiments with distinct Qiagen siRNA pools also showed a similar result for TP53, BCL9L, and ZFHX3.Efficient depletion of the BCL9L protein was observed for all four single oligonucleotides; however, we discarded the BCL9L oligonucleotide 4 due to high cellular toxicity.Finally, expression of a BCL9L-EGFP construct lacking the 3′ UTR region reverted the survival phenotype in reversine when an siRNA targeting 3′ UTR was transfected, further supporting that aneuploidy tolerance observed with various siRNA duplexes is due to on-target silencing of BCL9L.BCL9L silencing also increased cell viability following treatment with 200 nM aphidicolin, which causes replication stress and induces segregation errors of structurally unstable chromosomes.No tolerance effect was observed when HCT-116 cells were treated with doxorubicin, suggesting that silencing of BCL9L does not result in general resistance to cytotoxics causing DNA damage.Given the aneuploid-specific pattern of LOH and truncating events of BCL9L, the second most commonly truncated gene in aneuploid CRC after TP53 in our discovery cohort, and its putative aneuploidy tolerance function in the siRNA screen, we investigated BCL9L somatic events in independent cohorts.We analyzed data from 186 MSS CRCs available from The Cancer Genome Atlas.ZFHX3 was not investigated further as mutations in this gene were not enriched in aneuploid CRC in validation cohorts.We confirmed that samples with somatic copy-number loss of BCL9L had significantly lower gene expression compared with samples with no alterations in BCL9L.Using the wGII score as a surrogate of chromosomal instability and aneuploidy in CRC as previously described, we observed significantly higher wGII scores in tumors harboring BCL9L mutation or copy-number loss compared with tumors with no alteration in BCL9L.This relationship remained significant when controlling for the higher probability of gene loss in high-wGII tumors.The majority of BCL9L deletions and mutations co-occurred with TP53 mutations.In a similar computational permutation analysis as performed above, samples with co-occurring BCL9L and TP53 alterations displayed higher wGII scores compared with those with mutually exclusive alterations, suggesting that these two genes might cooperate as aneuploidy suppressors in CRC.Comprehensive genomic analysis from the same colorectal TCGA cohort enabled us to infer the genotype of BCL9L alterations in MSS CRC.In total, BCL9L mutations and/or deletions occurred in 14% of MSS CRC, the majority of which retained a wild-type copy of BCL9L while biallelic alterations of BCL9L occurred in only three samples.Taken together, these results suggest a haploinsufficient model of tumor suppression for BCL9L.Finally, we evaluated the pattern of BCL9L non-synonymous mutations in MSS CRC across the BCL9L protein.We compiled all BCL9L mutation data from the discovery cohort in Figure 1A, the TCGA MSS CRC cohort, a large cohort of 438 MSS CRC published by 
Giannakis et al. together with two additional validation cohorts of MSS CRC tumors sequenced by Ion Torrent targeted sequencing.The 27 BCL9L somatic mutations identified were scattered across the gene with one cluster of four missense mutations mapping to two adjacent residues within the β-catenin binding HD2 domain.Thirty-seven percent of the somatic mutations were truncating events whereas 17 of 27 were missense mutations.This characteristic profile of scattered mutations with >20% of inactivating/truncating mutations is consistent with the tumor-suppressor pattern proposed by Vogelstein et al.Consistent with these data, comprehensive computational analysis has classified BCL9L as a candidate driver gene in a pan-cancer analysis and as a significantly mutated driver gene in MSS CRC.The results shown above prompted us to carry out a more detailed study of the role of BCL9L dysfunction in aneuploidy tolerance.The diploid cell line HCT-116 expresses high levels of BCL9L, and siRNA transfection efficiently depleted BCL9L protein and mRNA.BCL9L silencing increased the number of metabolically active cells, total cell number, bromodeoxyuridine-incorporating cells, and colony-forming efficiency in reversine-treated HCT-116 cells and reduced reversine-induced apoptosis.In the absence of reversine, BCL9L silencing did not induce any significant changes in cell proliferation or apoptosis, nor did it affect the rate of constitutive segregation errors.BCL9L knockdown also induced reversine tolerance in a panel of near-diploid colorectal cell lines that express BCL9L.In contrast, survival of BCL9L mutant and/or non-expressing cells in reversine was not improved after BCL9L silencing, suggesting on-target specificity of the BCL9L siRNAs.Next, we examined the fate of daughter cells arising from error-free mitoses or mitoses with naturally occurring segregation errors.For this we used HCT-116 cells expressing H2B-RFP to visualize chromosomes.Following control siRNA transfection, the majority of daughter cells that had undergone a chromosome segregation error did not divide again within 48 hr.Longer-term observation of cells that had undergone a chromosome segregation error revealed that 24.5% of arrested cells died between 48 and 72 hr after mitosis.In contrast, the majority of daughter cells entered a second mitosis following silencing of BCL9L or TP53, whether an endogenous segregation error occurred or not.Similar results were found through live-cell microscopy of three additional cell lines.These results suggest that BCL9L dysfunction promotes survival following chromosome segregation errors by a mechanism that may not be unique to Mps1 inhibition but a more general mechanism that also applies to endogenous chromosome segregation errors.We generated HCT-116 cells with partial depletion of BCL9L using a lentiviral small hairpin RNA vector to study the long-term consequences of BCL9L silencing.Treatment with 125 nM reversine for 15 days revealed an increase in colony-forming efficiency in the shBCL9L cells relative to shControl cells.To study whether BCL9L depletion promotes the propagation of aneuploid cells, we treated cells with 125 nM reversine for 15 days followed by a 2-week recovery in drug-free medium, and performed centromeric FISH analysis with centromeric probes for four chromosomes.In untreated cells, BCL9L silencing produced a small but significant increase in the modal centromeric deviation when compared with shCtrl for chromosomes 2 and 8.In shBCL9L cells pre-treated with reversine, this 
increase was significant for all four probes.Total chromosome counts carried out on metaphase spreads derived from the same cells supported the development of aneuploidy in BCL9L-depleted cells treated with reversine.We did not detect structurally aberrant chromosomes in metaphase spreads.There was no evidence of cytokinesis failure resulting in tetraploidization in BCL9L-depleted cells treated with reversine.Next, we engineered BCL9L truncating mutations in HCT-116 cells similar to those observed in CRC using CRISPR/Cas9.Since most truncating BCL9L mutations preserve HD1, HD2, and HD3 domains, we designed a guide RNA targeting BCL9L C-terminal to the HD3 domain.Sanger sequencing of clones selected in 125 nM reversine for 2 weeks showed a 2.9-fold enrichment in BCL9L mutant clones selected in reversine when compared with untreated colonies.Both monoallelic and biallelic BCL9L truncations appeared to be selected by reversine treatment.Karyotypic analysis of metaphase spreads of HCT-116 with a heterozygous 5-bp deletion C-terminal to the HD3 domain generated by CRISPR/Cas9 showed an increase in aneuploidy in reversine-treated BCL9L−/+ cells in comparison with the WT control.Taken together, these results with BCL9L mutant cell lines and our genomic analysis support a role for BCL9L haploinsufficiency conferring aneuploidy tolerance.The majority of CRCs with BCL9L alterations also harbor TP53 mutations and this co-occurrence seems to coincide with higher wGII scores in tumors.Silencing of BCL9L in HCT-116 TP53-null cells also increased the fraction of surviving cells after 3 days of reversine treatment and the number of resistant colonies in HCT-116 TP53-null and CL-40 cells, a CRC cell line harboring the most frequent TP53 mutation in CRC.These data support the hypothesis that BCL9L depletion results in an additive survival effect in TP53-mutant cells and suggest that loss of BCL9L contributes to aneuploidy tolerance in both TP53-competent and mutant CRC.To determine the role of BCL9L as an aneuploidy suppressor in vivo, we injected BCL9L-depleted or control cells into immunocompromised mice following the protocol shown in Figure 4A.We observed that reversine pre-treatment of shBCL9L cells dramatically improved the engraftment efficiency and growth rate when compared with the rest of the experimental situations.Although untreated shBCL9L cells did not engraft better, they displayed a modest growth advantage when compared with untreated shCtrl cells, although these differences were not statistically significant.We hypothesized that the increased karyotypic diversity in BCL9L-deficient cells pre-treated with reversine might lead to clonal selection of advantageous karyotypes, promoting intratumor heterogeneity of whole chromosome aneuploidies in the mouse xenografts.SNP profiling of the xenografts detected ubiquitous alterations on chromosomes 8, 10, 16, and 17 that are known for parental HCT-116 cells, together with intratumor heterogeneity for whole chromosome 12 in one region of two BCL9L-depleted xenografts and whole chromosome 7 gain in one region of one BCL9L-depleted xenografts.Notably, whole chromosome gains were observed in shBCL9L cells both with and without reversine, substantiating the role of BCL9L loss in the tolerance and propagation of endogenous segregation errors.Control cells did not show any heterogeneous whole chromosome alterations.Gain of the long arm of chromosome 21 was seen in one region of one shCtrl xenograft.These data support the ability of BCL9L depletion to 
foster intratumor heterogeneity and the propagation of subclones with whole chromosome aneuploidies distinct from other subclones within the same tumor.p53 stabilization mediates apoptosis and cell cycle arrest upon genotoxic stress.Western blot analysis showed that BCL9L silencing strongly inhibited p53 accumulation following reversine treatment in TP53-WT HCT-116, SW48, and C99 cells, an effect reproduced with three siRNA duplexes.We did not detect p53 accumulation in TP53-mutant SNU-C5 cells upon reversine treatment.BCL9L silencing in HCT-116 did not affect TP53 mRNA levels.However, BCL9L silencing in HCT-116 inhibited the induction of the p53 transcriptional targets CDKN1A and BBC3 after reversine treatment.We then examined MDM2 expression due to its important role in regulating p53 stability.Following reversine treatment, we observed an intense band around 60 kDa similar to the MDM2-p60 N-terminal cleavage product previously described that was not detected by C-terminal MDM2 antibodies.MDM2-p60 accumulated mainly in the nucleus where it co-localized with p53.Importantly, MDM2 cleavage was still detectable in TP53-null cells following reversine exposure, and was impaired following BCL9L silencing and reversine treatment in both TP53-WT and TP53-null cells.Active caspase-2 cleaves MDM2, generating the MDM2-p60 fragment as part of a p53 regulatory cascade.MDM2-p60 conserves the p53 binding domain but is devoid of the RING domain.p60-p53 heterodimers cannot be targeted for degradation, which ultimately enhances p53 accumulation.We observed that reversine treatment induced cleavage of caspase-2 in both HCT-116 TP53-WT and to a lesser extent in TP53-null cells.We also observed reduced levels of caspase-2 protein and mRNA in BCL9L-depleted HCT-116 TP53-WT and null cells, which contributed to lower levels of active caspase-2 upon reversine treatment.A reduction in caspase-2 protein following BCL9L silencing was also confirmed in other cell lines and with different siRNA sequences targeting BCL9L.qPCR analysis revealed that reversine treatment did not increase the expression of PIDD mRNA, a gene involved in p53-dependent caspase-2 activation.Cell synchronization and transient reversine exposure revealed that p53 stabilization, MDM2 cleavage, and caspase-2 activation are detectable after one division in the presence of reversine, confirming that one cell division is sufficient to trigger these three events.Similar to BCL9L silencing, caspase-2 depletion by RNAi attenuated both p53 accumulation and MDM2 cleavage upon reversine treatment.These results suggest that reversine induces proteolytic activation of caspase-2 partially independent of p53, and depletion of BCL9L reduces caspase-2 expression that ultimately prevents cleavage of MDM2 and stabilization of p53 following reversine exposure.Given the higher number of karyotypic alterations in CRC with co-occurrence of BCL9L and TP53 alterations and our results showing a BCL9L survival effect in TP53-WT and null backgrounds, we investigated a potential p53-independent role for BCL9L in aneuploidy tolerance.Since caspase-2 cleavage was detectable, but at reduced levels, in TP53-null cells, we assessed the role of other caspase-2 substrates, such as BID, in mediating aneuploidy tolerance.Although BCL9L silencing in TP53-null cells resulted in lower levels of basal BID mRNA, only a moderate reduction in BID protein steady-state levels was observed.In TP53-null cells, reversine treatment induced formation of a 15 kDa band consistent with tBID, 
derived through caspase-mediated cleavage of BID.Silencing of either BCL9L or caspase-2 attenuated this cleavage.tBID relocalizes to the outer mitochondrial membrane where it activates the mitochondrial apoptotic pathway.Consistent with a p53-independent pro-apoptotic role for BCL9L, BCL9L and caspase-2 depletion prevented polypolymerase cleavage in reversine-treated TP53-null HCT-116 cells.Colony-forming assays confirmed that BID depletion by siRNA had a similar effect to BCL9L silencing in reversine-treated HCT-116 TP53-null cells.We did not find significant changes in the expression of other caspases and mitochondrial apoptotic regulators.Consistent with the results shown above, caspase-2 depletion increased resistance to reversine treatment in BrdU incorporation assays, and also increased tolerance to endogenous segregation errors in HCT-116 cells.Finally, co-transfection of siRNA targeting BCL9L and a caspase-2 expression plasmid reverted the tolerance of reversine treatment mediated by BCL9L depletion.These observations support a mechanism of aneuploidy tolerance whereby caspase-2 suppression in BCL9L-depleted cells enhances the survival of cancer cells after endogenous or drug-mediated segregation errors in both TP53-WT and TP53-null backgrounds.Next, we explored the hypothesis that BCL9L loss drives aneuploidy tolerance through repression of Wnt signaling.BCL9/BCL9L and their binding partners β-catenin and Pygo function as transcriptional co-activators that facilitate the activity of the TCF/LEF family of transcription factors.We confirmed that BCL9L silencing inhibited TCF4 transcriptional activity in reporter assays and expression of Wnt signaling targets.Examination of the ENCODE database revealed a potential TCF4-binding site near the transcription start site of CASP2 that we were able to confirm by TCF4 chromatin immunoprecipitation in HCT-116 cells.Treatment of HCT-116 with PNU74654, a drug that inhibits Wnt signaling by impairing β-catenin binding to TCF4, triggered a statistically significant downregulation of the Wnt targets AXIN2 and MYC along with reduction of caspase-2 mRNA and protein.In addition, treatment of HCT-116 cells with PNU74654 induced reversine tolerance relative to HCT-116 cells treated with reversine alone.In summary, we propose a model in which partial loss of BCL9L results in lower caspase-2 mRNA and protein levels in both TP53-WT and mutant cells, likely mediated through inhibition of TCF4 transcriptional activity at the CASP2 promoter.After chromosome segregation errors, fully functional BCL9L permits transcription and activation of caspase-2, resulting in p53 stabilization via MDM2 cleavage in TP53-WT cells and BID cleavage in TP53-mutant cells, ultimately inducing arrest and apoptosis.In cancer cells, BCL9L dysfunction results in lower levels of caspase-2, and when chromosome missegregation occurs this deficiency results in suboptimal activation of caspase-2, leading to impaired p53 stabilization, tBID formation, and attenuated cell death.Aneuploidy has prognostic relevance in multiple cancer types and is associated with cancer multidrug resistance.These high-risk features of CIN suggest that targeting aneuploid cancer cell populations may have therapeutic potential, emphasizing the importance of understanding the cellular processes that initiate and promote tolerance of aneuploidy.Tumors harbor a wide spectrum of structural and numerical chromosomal alterations ranging from diploid or near-diploid tumors to highly aneuploid samples with more complex 
karyotypes.Notwithstanding that p53 is closely associated with CIN and aneuploidy in CRC, little is known about somatic events that might cooperate with p53 dysfunction in generating or sustaining the accumulation of chromosomal alterations.Our data provide support for BCL9L as an aneuploidy tumor-suppressor gene in CRC, the loss of which sustains aneuploidy tolerance, both independently of and in cooperation with p53, through repression of caspase-2.These results are supported by studies in caspase-2 knockout mouse models in which transformed cells develop aneuploidy and become more aggressive.The mechanisms leading to p53 accumulation in response to chromosomal missegregation events are unclear.DNA damage, histone phosphorylation and reactive oxygen species have all been proposed as mechanisms of p53 accumulation in CIN cells.Our data reveal that caspase-2 depletion induces tolerance of endogenous chromosome segregation errors and prevents p53 accumulation in response to artificial induction of chromosome segregation errors using an Mps1 inhibitor, reversine, supporting a central role for caspase-2 as an enzyme regulating p53, underpinned by seminal work from other groups.We found that loss of BCL9L prevents cleavage of BID through caspase-2 in TP53-null cells and thereby inhibits apoptosis.This p53-independent role for caspase-2 in the suppression of aneuploidy might operate as a fail-safe mechanism to limit CIN in TP53-mutant tumors, thereby compromising outgrowth of heterogeneous tumor cells and impairing subsequent tumor adaptation.Conceivably, in TP53-WT cells parallel mechanisms of aneuploidy surveillance independent of p53 might reinforce the removal of aneuploid cells in instances where chromosome missegregation events may remain undetected by p53.Our results support the possibility that caspase-2 can be activated upstream of p53 after chromosome missegregation.This emphasizes the need to elucidate the mechanisms of caspase-2 activation, such as its dependence on phosphorylation or proteotoxic stress, frequently observed in aneuploid cells.This process might constitute a mechanism of genome instability sensing that can operate independently of p53.Although evidence supports roles for both BCL9L and BCL9 in the regulation of gene expression that are independent of β-catenin, our data suggest that caspase-2 is a target of the β-catenin/TCF4 transcriptional complex.However, it is unclear whether BCL9L regulates a specific subset of genes distinct from its homolog BCL9.Our data support the “just-right” model for the modulation of Wnt signaling in tumors to render TCF transcriptional activation sufficient for cancer cell viability, while minimizing transcriptional activation of genes associated with cell death.More specifically, partial inhibition of TCF4 transcriptional activity in tumors with excessive Wnt pathway activation through BCL9L dysfunction might reduce caspase-2 expression to levels compatible with cell viability, enhancing tolerance of segregation errors and intratumor heterogeneity.Consistent with these observations, and notwithstanding the limitations of the xenograft evolutionary experiments due to animal welfare considerations and the resulting short time course of such studies, the xenograft data support the ability of BCL9L silencing to propagate intratumor heterogeneity manifested as whole chromosome aneuploidies that are spatially distinct within individual tumors.Based on the genomic analysis presented here, the characterization of BCL9L as an aneuploidy 
suppressor conforms to a haploinsufficiency model based on results from our analyses of CRC datasets and our functional work.This model has been frequently described for other tumor suppressors such as PTEN, BRCA1, and RAD17.Such observations may begin to explain why the identification of aneuploidy suppressors has proved evasive, and suggest the need for deeper analysis of genomic datasets focusing on haploinsufficiency as a possible mechanism of tolerance to large-scale karyotypic alterations.In summary, these data support a role for BCL9L as an aneuploidy tolerance gene, conforming to criteria for a significantly mutated driver gene and tumor suppressor in CRC.Understanding aneuploidy tolerance mechanisms more widely, and the BCL9L/caspase-2/BID axis specifically, may unravel potential vulnerabilities in aneuploid cancers, which could be exploited to limit intercellular heterogeneity, a substrate for selection and tumor evolution.Tissue collection was approved by an ethics committee, and all individuals included in this study had provided written informed consent for the analysis presented.Five thousand HCT-116 cells per well were seeded on 96-well plates with siRNA transfection medium and DMSO or 250 nM reversine.Cells were grown for 3 days and cell viability was measured by Cell Titer Blue.The surviving fraction for each siRNA pool was calculated as the ratio of the fluorescent Cell Titer Blue signal of treated wells between untreated wells.Data shown in Figure 1A were normalized to siCtrl2.For siRNA short-term survival, cells were plated in transfection medium and 250 nM reversine for 3 days.Cell number was measured by DAPI staining and automated imaging, cell viability was measured with Cell Titer Blue or alternatively, cells were harvested and analyzed after 1 hr of BrdU incorporation.For colony-forming assays with siRNA, cells were transfected for 3 days and replated in serial dilutions in the presence or absence of reversine.After 5 days of treatment, reversine-containing medium was replaced by drug-free medium and cell colonies grown until the appropriate size was reached.For long-term colony-forming assays, cells were treated for 15 days with 125 nM reversine.For development of aneuploidy, stable BCL9L knockdown and control cells were treated for 15 days with 125 nM reversine.Reversine-containing medium was replaced by drug-free medium and cells were allowed to recover for 2 weeks.Next, cells were grown on glass slides and centromeric FISH was performed.Centromeric signals were counted and modal centromeric variation was calculated as the fraction of cells with centromeric signals different from the modal number within the population.All animal regulated procedures were approved by The Francis Crick Institute BRF Strategic Oversight Committee that incorporates the Animal Welfare and Ethical Review Body and conformed with UK Home Office guidelines and regulations under the Animals Act 1986 including Amendment Regulations 2012.For a complete description see Supplemental Experimental Procedures.Conceptualization, C.L.-G., L.S., I.T., and C.S.; Methodology, C.L.-G., L.S., N.McG., E.G., A.J.R., R.B., H.D., and C.S.; Software, Formal Analysis, and Data Curation, C.L.-G., N.McG., N.J.B., S.H., F.F., A.S., M.K., and C.S.; Investigation and Validation, C.L.-G., L.S., S.H., E.G., A.J.R., N.M., S.B., B.P., D.O., M.N., and R.B.; Resources, A.J.R., E.D., H.D., G.S., B.S.-D., and I.T.; Writing – Original Draft, C.L.-G.and C.S.; Writing – Review and Editing and Visualization, C.L.-G., L.S., 
N.McG., S.H., N.J.B., E.G., I.T., and C.S.; Supervision, C.L.-G., I.T. M.N., H.D., and C.S.; Funding Acquisition, C.S.
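The surviving fraction and the modal centromeric deviation described in the experimental procedures above reduce to simple ratios; a minimal sketch, assuming hypothetical well signals and FISH counts rather than the authors' measurements:

```python
from collections import Counter

def surviving_fraction(signal_treated, signal_untreated):
    """Ratio of Cell Titer Blue fluorescence in a treated well to an untreated well."""
    return signal_treated / signal_untreated

def modal_centromeric_deviation(centromere_counts):
    """Fraction of cells whose centromeric signal count differs from the modal number."""
    tally = Counter(centromere_counts)
    _, cells_at_mode = tally.most_common(1)[0]
    return 1 - cells_at_mode / len(centromere_counts)

# Hypothetical example: 100 cells scored with a chromosome 2 centromeric probe.
counts = [2] * 88 + [1] * 5 + [3] * 7
print(surviving_fraction(12500.0, 21000.0))   # ~0.60
print(modal_centromeric_deviation(counts))    # 0.12
```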
Chromosomal instability (CIN) contributes to cancer evolution, intratumor heterogeneity, and drug resistance. CIN is driven by chromosome segregation errors and a tolerance phenotype that permits the propagation of aneuploid genomes. Through genomic analysis of colorectal cancers and cell lines, we find frequent loss of heterozygosity and mutations in BCL9L in aneuploid tumors. BCL9L deficiency promoted tolerance of chromosome missegregation events, propagation of aneuploidy, and genetic heterogeneity in xenograft models likely through modulation of Wnt signaling. We find that BCL9L dysfunction contributes to aneuploidy tolerance in both TP53-WT and mutant cells by reducing basal caspase-2 levels and preventing cleavage of MDM2 and BID. Efforts to exploit aneuploidy tolerance mechanisms and the BCL9L/caspase-2/BID axis may limit cancer diversity and evolution.
496
Simultaneous Particle Size Reduction and Homogeneous Mixing to Produce Combinational Powder Formulations for Inhalation by the Single-Step Co-Jet Milling
Delivering antibiotics to the lungs that is the site of infection at an optimal drug concentration is central to effective treatment of respiratory infections.1,However, for many antibiotics, only a small fraction of oral and parenterally administered antibiotics reaches the infection sites in the lungs.2,3,In addition, there is a considerable risk of emergence of resistance and severe side-effects of systemically administered antibiotics which are grand challenges to the clinical management of chronic lung infections.Inhalation is a promising route for administering appropriate dosages of drugs directly to the lungs.4,5,Compared with systemic/oral therapy, inhalation therapy ensures rapid clinical responses with relatively lower doses while reducing the risk of systemic side-effects and the risk of emergence of resistance.1,4,6,Spray drying and jet milling are 2 commonly used techniques to develop dry powder inhaler formulations.7-10,Spray drying has been used to produce combination formulations; however, some spray-dried drugs may generate relatively less thermodynamically stable amorphous forms.The unstable amorphous particles may convert into more thermodynamically stable crystalline form,11 which could alter the aerosolization behavior.12-14,Although the jet-milled particles are in general very cohesive with poor flowability and unsatisfactory aerosolization behavior,15,16 jet milling is still the mainstay approach to manufacture inhalable drug particles in the industrial practice.7,Studies have also shown that co-jet milling with lubricants may potentially facilitate simultaneous particle size reduction and particle surface coating, which resulted in improved aerosolization performance.16-20,However, homogeneous mixing of 2 cohesive jet-milled drug particles is a grand challenge for pharmaceutical manufacturing on account of their cohesive nature that forms strong and random agglomerates.21,Addition of coarse lactose carriers may be an option to improve homogeneity of 2 drugs for low-dose inhalation medications; but for high-dose inhalation medicines such as antibiotics, large amounts of lactose carriers should be avoid to reduce the total powder load.22,23,Here, our hypothesis is that co-jet milling 2 drugs can generate a homogeneous DPI formulation.Colistin and ciprofloxacin are selected as 2 model drugs here as these 2 antibiotics have shown synergistic antimicrobial activities against resistant Pseudomonas aeruginosa.24,25,Colistin DPI has been approved in the Europe, and ciprofloxacin DPI is under development.26,Combining colistin and ciprofloxacin into a single DPI product may improve patients’ compliance and maximize the antimicrobial activities.Ciprofloxacin hydrochloride monohydrate and colistin sulfate were supplied by Beta Pharma.Acetonitrile was supplied by Merck.The content uniformity of ciprofloxacin and colistin in the resultant combination formulations were quantified.Briefly, 10 samples of 10 ± 0.5 mg for each formulation were weighed and dissolved in mobile phase, which was then diluted to an appropriate concentration for quantification of ciprofloxacin and colistin.The drug quantification methods for content homogeneity and dispersion tests are provided above.One-way analysis of variance with post hoc Tukey test was employed to determine the statistical difference for 3 groups and more using a Prism software.It was deemed as significantly different if p < 0.05.An initial blend of ciprofloxacin and colistin was prepared manually by stirring 2 powders for 5 min in a 
mortar and pestle. The resultant blend was co-milled using a jet mill at a feed rate of 1 g/min, a grinding pressure of 5 bar, and a feeding pressure of 6 bar. Each pure drug was jet-milled at the same processing parameters. A scanning electron microscope was used to assess particle morphology. Briefly, the powder sample was coated with a platinum film at 40 mA for 1 min using a sputter coater, and images were taken with the built-in software. Particle size analysis was conducted with a Mastersizer 3000 equipped with an Aero-S dry powder dispersion unit, using the laser diffraction method. Compressed air at 4 bar was used to disperse each powder sample, and each sample was measured for 5 s. A feed rate of 50%-60% was used to keep the laser obscuration at a suitable level of 2%-6%. A Rigaku Smartlab™ diffractometer with a Cu-Kα radiation source was used to evaluate powder crystallinity. The diffraction patterns were collected from 5° to 40° 2θ at a scan speed of 5°/min at 40 kV. A dynamic vapor sorption instrument was used to evaluate the moisture sorption properties of the powder samples. Each measurement comprised a sorption cycle and a desorption cycle between 0% and 90% RH in 10% RH increments. Powders were considered to have reached equilibrium when the change in mass with respect to time, dm/dt, was less than 0.002% per minute. The distribution of the 2 drugs in each combinational formulation was determined using ToF-SIMS at 30 kV.13 ToF-SIMS data were collected from 4 areas per sample. The fragment at m/z ∼332 atomic mass units (amu) was selected as the exclusive marker for ciprofloxacin, and the fragments at m/z ∼30 amu and ∼86 amu were chosen for colistin.13 Contents of ciprofloxacin and colistin were measured by an established HPLC method.13 Briefly, an HPLC system and an Eclipse Plus column were used to detect both ciprofloxacin and colistin at 215 nm. The mobile phase, consisting of 76% w/w 30 mM sodium sulfate solution and 24% v/v acetonitrile, was run at 1.0 mL/min. The standard curves were linear in the concentration ranges of 0.01-0.5 mg/mL for colistin and 0.004-0.125 mg/mL for ciprofloxacin. A next-generation impactor (NGI) was used to determine in vitro aerosolization performance.27 Each powder sample was filled into Size 3 HPMC capsules. The capsules were loaded into an RS01 DPI device, which has a similar design to the Osmohaler. Briefly, 4 L of air was drawn through the inhaler by a vacuum pump to generate 2 airflow rates, 60 L/min for 4 s and 100 L/min for 2.4 s, corresponding to pressure drops of ∼1.6 kPa and ∼4 kPa across the RS01 DPI device, respectively. Fine particle fraction was calculated as the fraction of drug with an aerodynamic diameter <5 μm over the total recovered drug. Triplicates were conducted for each formulation. The jet-milled ciprofloxacin and colistin particles had irregular shapes. Some small particles adhered to the relatively large ones. There was no apparent difference among the different jet-milled particles. All jet-milled formulations had similar size distributions, with D50 < 2.1 μm and D90 < 5.4 μm, indicating that most particles were very small. In general, it is challenging to obtain homogeneous mixtures of 2 very cohesive powders because the cohesive particles stick to each other strongly.28,29 The formulations produced via co-jet milling demonstrated an acceptable content homogeneity, with drug contents within 90%-110% and an acceptance value ≤15%. Powder X-ray Diffraction (P-XRD) patterns of the jet-milled
ciprofloxacin showed some crystalline peaks.The P-XRD patterns of colistin had no peaks indicating the amorphous nature.The co-jet milled samples consisting of colistin and ciprofloxacin also showed peaks corresponding to ciprofloxacin.Previous studies have shown that the spray-dried ciprofloxacin is amorphous, which tends to crystallize upon storage and affects the aerosolization.12,Because the jet-milled ciprofloxacin was crystalline, it is likely to have better physical stability than the spray-dried amorphous ciprofloxacin particles.The jet-milled colistin absorbed substantial amounts of water at the elevated RH owing to its amorphous nature.30,In contrast, the jet-milled ciprofloxacin absorbed substantially lower amounts of water at all humidity levels as compared with the jet-milled colistin.The moisture absorption levels for the co-jet milled formulations are between the jet-milled ciprofloxacin and jet-milled colistin.In addition, the water uptake was completely reversible indicating no apparent crystallization events.There was hysteresis between sorption and desorption profiles for the colistin-containing formulations.This is owing to the hygroscopic nature of amorphous colistin with moisture trapped into the invaginations or cores of particles during sorption, while a slower removal of water during the desorption process.31,32,We expect no coating shall occur during the co-jet milling process; surface characterization technique of ToF-SIMS was used to qualitatively evaluate distributions of 2 drugs in the co-jet milled powder formulations.Our ToF-SIMS measurement had a relatively high spatial resolution of ∼250 nm.Figure 2 showed that in general ciprofloxacin and colistin particles are relatively homogeneous in the co-jet milled powders.In Figure 3a, the data at 0% colistin concentration are for the ciprofloxacin jet-milled alone formulation, 25% for the colistin-ciprofloxacin 1:3 formulation, 50% for the colistin-ciprofloxacin 1:1 formulation, 75% for the colistin-ciprofloxacin 3:1 formulation, and 100% for the colistin jet-milled alone formulation.The results showed that the jet-milled ciprofloxacin alone powder showed a relatively lower FPF of 57.5 ± 1.9, whereas the colistin alone powder had a relatively higher FPF of 80.2 ± 1.7 at the flow rate of 100 L/min.FPF values of the co-jet milled powders was significantly higher than the jet-milled ciprofloxacin alone powder but significantly lower than that of the jet-milled colistin alone formulation at both 60 L/min and 100 L/min.It is noteworthy that for each co-jet milled formulation, no significant difference was measured in FPF between colistin and ciprofloxacin.Deposition data also showed that in general 2 drugs have similar deposition patterns for each formulation except for those in the throat, S2, S4 and S5.Such similar FPF and deposition patterns in each co-jet milled formulation confirmed the formation of relatively homogeneous mixtures, despite very different aerosolization behavior of 2 jet-milled pure drugs; though such similarity in deposition patterns for 2 drugs does not reach the same level to the co-spray-dried particles, which incorporate 2 drugs in a single particle.13,33-35,Another possibility is that 2 jet-milled drugs form preferential agglomerates according to the loaded drug ratios.Mixing 2 cohesive powders into a uniform blend is extremely challenging owing to formation of strong and random agglomerates.Majority of current combinational low-dose DPI products are blended mixtures of coarse carriers and 
separately jet-milled drug particles, or are made by packing the 2 drug formulations in 2 separate blisters. However, for high-dose medications such as antibiotics, large amounts of coarse carriers should be avoided. Our study has shown that co-jet milling of colistin and ciprofloxacin achieved simultaneous size reduction and homogeneous mixing of the 2 drugs, as supported by the content uniformity, ToF-SIMS, and aerosol deposition data. A potential limitation of co-jet milling 2 drugs is the difficulty of controlling the particle size of each drug, specifically when the mechanical properties of the 2 drugs differ. Understanding such relatively homogeneous mixtures is critical to producing inhalation products of superior quality.
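The fine particle fraction described in the aerosolization methods above reduces to a simple mass ratio; a minimal sketch, assuming hypothetical NGI deposition values and approximate stage cut-off diameters at 60 L/min (assumed values, not data from this study):

```python
def fine_particle_fraction(stage_mass_ug, cutoff_um, total_recovered_ug):
    """FPF (%): drug mass on stages with cut-off diameters < 5 um over total recovered drug."""
    fine_mass = sum(m for m, d in zip(stage_mass_ug, cutoff_um) if d < 5.0)
    return 100.0 * fine_mass / total_recovered_ug

# Hypothetical NGI deposition (ug) on stages 1-7 and approximate cut-offs at 60 L/min.
stage_mass_ug = [150, 220, 310, 260, 140, 60, 20]
cutoff_um = [8.06, 4.46, 2.82, 1.66, 0.94, 0.55, 0.34]
total_recovered_ug = 2200  # includes capsule, device, throat, and all stages
print(fine_particle_fraction(stage_mass_ug, cutoff_um, total_recovered_ug))  # ~45.9%
```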
Homogeneous mixing of 2 cohesive jet-milled drug powders is a challenge for pharmaceutical manufacturing on account of their cohesive nature resulting in the formation of strong and random agglomerates. In this study, colistin and ciprofloxacin were co-jet milled to develop combinational antibiotic dry powder formulations for inhalation. The properties of particle size, morphology, content uniformity, and in vitro aerosolization were evaluated. The distribution of 2 drugs in the co-jet milled powders was assessed using time-of-flight–secondary ion mass spectrometry. The co-jet milled powders demonstrated an acceptable content uniformity indicating homogeneity. In general, time-of-flight–secondary ion mass spectrometry images showed relatively homogeneous distributions of ciprofloxacin and colistin in the co-milled formulations. Importantly, the 2 drugs generally had the similar fine particle fraction and deposition behavior in each combinational formulation supporting that the particle mixtures were relatively homogenous and could maximize the antimicrobial synergy. In conclusion, co-jet milling could be a viable technique to produce the combination powders for inhalation.
497
Effect of methionine-35 oxidation on the aggregation of amyloid-β peptide
Alzheimer's disease is characterized by progressive neuronal death and the accumulation of protein aggregates in certain brain areas.The aggregation of amyloid-β peptide into extracellular amyloid plaques is identified as the key molecular event in AD, however, the molecular cascade leading to the death of neurons remains elusive.The increase of oxidative stress levels in certain brain areas is also characteristic to the disease and this increase is assumed to be an early event in AD .The major risk factor for the pathogenesis of AD is aging and according to the free radical hypothesis the major cause of aging is the accumulation of ROS and the accumulation of oxidative damage .The relationship between OS and AD progression is a complex one: elevated levels of OS can be both, the risk factor and the consequence of AD progression .The toxicity of Aβ aggregates is assumed to arise from their ability to generate free radicals , which depends on the presence of copper ions.Considering the involvement of Aβ in oxidative processes, the Met35 is one of the most intriguing amino acid residues in the peptide molecule.Met35 has the most easily oxidized side chain in the peptide and it is partially oxidized in post mortem amyloid plaques .There has been many speculations about the role of Met35 in the formation of amyloid plaques and peptide toxicity: it has been suggested that Aβ mediated generation of ROS is initiated by the Met 35 residue .On the other hand, a particular function in neuroprotection is also proposed for Met35 due to the antioxidant character of the thioether group .The suggestions about the involvement of Met35 in toxic mechanisms is supported by the observation that Met35 oxidation state is critical for Aβ synaptotoxicity – the oxidized form was clearly less toxic that the reduced one .It has been shown that oxidation of Met35 reduces the Aβ42 toxicity in human neuroblastoma cells , however, the substitution of Met35 to valine and norleucine had no effect on the toxicity of Aβ .Methionine residues can be oxidized by two different mechanisms .First, they can be spontaneously oxidized to methionine sulfoxide in a two electron process by some oxidants such as H2O2 and molecular oxygen.This oxidation pathway can be considered relatively benign since no reactive oxygen species or free radicals are formed.The oxidation of methionine to sulfoxide is a common reaction in living organism and the oxidation can be reversed by the methionine sulfoxide reductases, enzymes that reduce the sulfoxide and are an essential part of redox detoxification mechanisms in the living organisms.Methionine oxidation to sulfoxide is demonstrated to inhibit fibril formation and this observation has also lead to various speculations about the essential role of the oxidation of Met35 in AD.The sulfoxide can be further oxidized to sulfone, however, this reaction does not occur in physiological environments.From the viewpoint of AD progression the second mechanism, one-electron oxidation of methionine to a highly reactive radical cation intermediate, is more intriguing.The single-electron mechanism can be catalyzed by copper ions that can have an essential role in AD etiology since they bind to Aβ with high affinity and enhance its aggregation whereas the electrochemically active copper ions buried within the aggregates can generate ROS and cause OS .An intriguing question is whether the radical cation generated from Met35 is an essential intermediate in the pathway of Aβ generated oxidative stress or is this just 
a supplementary process that does not contribute to ROS generation and OS.In order to answer these questions we studied the kinetics and products of the oxidation of Aβ by H2O2 in the presence and absence of copper ions as the catalysts of one electron oxidation processes.Methionine oxidation slightly decreased the rate of in vitro fibrillization of the Aβ peptides but did not changed its ability to catalyze the formation of ROS.Lyophilized Aβ40 and Aβ42 peptides NaOH salts or HFIP forms were purchased from rPeptide.HEPES, Ultrapure, MB Grade was from USB Corporation, 1,1,1,3,3,3-hexafluoro-2-propanol and Thioflavin T were from Sigma Aldrich.NaCl was extra pure from Scharlau.All solutions were prepared in fresh MilliQ water.Stock solution of Aβ peptides was prepared as follows: 1 mg of the peptide was dissolved in HFIP at a concentration 500 μM to disassemble preformed aggregates .The solution was divided into aliquots, HFIP was evaporated in vacuum and the tubes with the peptide film were kept at −80 °C until used.Before using the Aβ HFIP film was dissolved in water containing 10 mM NaOH at a concentration of 10–20 μM.After 5 min incubation the Aβ stock solution was dissolved with buffer and used for experiments.Fluorescence spectra were collected on a Perkin-Elmer LS-45 fluorescence spectrophotometer equipped with a magnetic stirrer.Fibrillation was monitored using ThT fluorescence.If not otherwise stated, fresh Aβ stock solution was diluted in 20 mM HEPES and 100 mM NaCl, pH 7.4 containing 3.3 μM of ThT to a final concentration of 5 μM.400 μl of each sample was incubated at 40 °C if not otherwise stated.ThT fluorescence was measured at 480 nm using excitation at 445 nm.An aliquot of 5 μl of sample was loaded on a Formvar-coated, carbon-stabilized copper grid.After 1 min, the excess solution was drained off using a Whatman filter paper.The grid was briefly washed and negatively stained with 5 μl of 2% uranyl acetate.The grid was air-dried and then viewed on a Tecnai G2 BioTwin transmission electron microscope operating with an accelerator voltage of 80 kV.Typical magnifications ranged from 20,000 to 60,000×.20 Aβ M of alkaline Aβ solution in 0.1% NaOH was diluted with equal volume of 0.1 M phosphate buffer containing hydrogen peroxide and Cu.Final concentrations of H2O2 was 1% and the concentrations of copper ions were 0.1 and 10 µM.The kinetics of Aβ oxidation and the disappearance of monomers from the solution were monitored with matrix-assisted laser desorption/ionization mass spectrometry using the energy absorbing matrix α-cyando-4-hydroxycinnamic acid .Matrix CHCA was dissolved in 60% acetonitrile and 0.3% trifluoroacetic acid to a concentration of 10 mg/ml, containing 0.3 µM bovine insulin as an internal standard.Samples were mixed with matrix and 1 µl was spotted on the MALDI plate.MALDI MS spectra were acquired by Voyager-DE™ STR Biospectrometry Workstation in linear mode using automated program.Instrument parameter/settings: accelerating voltage 25,000 V; mass range 1500–10,000 Da; delay time 485 ns; grid voltage 93%; laser intensity 2200 V, shots per spectrum – 40; accumulated spectrum – 10.The site of oxidation was determined by sequencing the oxidized peptide on ESI-MS.10 µM Ab42 in 100 mM ammonium acetate buffer was injected into the electrospray ion source of QATAR Elite ESI-Q-TOF MS instrument by a syringe pump at 7 µl/min.ESI MS spectra were recorded for 5 min in the m/z region from 300 to 1500 Da with the following instrument parameters: ion spray voltage 5500 V; source 
gas 45 l/min; curtain gas 20 l/min; declustering potential 45 V; focusing potential 260 V; detector voltage 2450 V; collision energy 50 V; collision gas 5 l/min; precursor-ion mode.Sodium dodecyl sulfate polyacrylamide gel electrophoresis was performed using Mini-PROTEAN Tetra System.Samples were mixed with loading buffer, maintained at room temperature, applied to Bicine–Tris 15%T/5%C gel and resolved in a cathode buffer with 0.25% SDS .Gels were fixated in glutaraldehyde/borate buffer solution for 45 minutes and stained with silver according to a protocol .The oxidation of Aβ42 by H2O2 was studied by MALDI mass spectrometry.Fig. 1 shows that the peptide was easily oxidized in the presence of 200 mM H2O2.During the incubation with hydrogen peroxide the molecule mass of the peptide increased by 16 units and the process was almost complete within 40 min.The oxidation of Met35 residue was confirmed by ESI MS/MS sequencing of the resulting oxidized peptide.Kinetic analysis of the MALDI MS data showed that the disappearance of reduced Aβ and the increase of AβMet35ox followed the first order kinetics.In the absence of copper ions AβMet35ox was the only product of the oxidation: no side products were detected in a significant amount.The oxidation pattern of Aβ in the presence of copper ions was more complex.As the copper complex of Aβ42 tends to form a precipitate during 30 min of incubation the oxidation in the presence of copper ions was studied using Aβ40 peptide.Fig. 2 shows that the addition of a single oxygen atom was still the fastest process; however, the widening of the peptide peak in MS showed that subsequently a large number of non-specific modifications reactions in the Aβ molecule occurred.When Cu ions were added to the reaction mixture after the oxidation of Met35 by H2O2, a similar pattern of nonspecific modifications was observed, suggesting that the single-electron oxidation processes in Aβ does not depend on the oxidation state of Met35.In principle, a peptide dimer can also form by dityrosine crosslinking during peptide oxidation.SDS PAGE shows that peptide dimers do not form under our experimental conditions, peaks of dimers and trimers were also not detected by MALDI MS.The fibrillization of Aβ was studied under the conditions of intensive agitation where the fast fibrillization of the peptide proceeds with good reproducibility .Incubation of unpurified oxidized peptide under these conditions did not reveal an increase in ThT fluorescence; however, the peptide peak in the MALDI MS spectrum disappeared showing fibrillation and in a TEM image typical non-matured fibrils were observed.As the oxidizing agent can interfere with the ThT based detection method we repeated the experiments after removing the oxidizing solution by lyophilization and redissolving the peptide in HEPES buffer.Both oxidized peptides, Aβ42 and Aβ40, showed a typical sigmoidal fibrillization curve, whereas the value of the fibrillization rate constant, k, was threefold lower than that of the regular peptide.The lag-phase for the oxidized peptide was also longer however, the resulting fibrils look similar in TEM.In this work we studied the oxidation of Aβ in the presence of two physiologically relevant redox-active compounds H2O2 and copper ions.Low concentrations of H2O2 are always present in living organisms and take part in various redox processes; copper ions bind to Aβ with high affinity in a catalytically active form and are present in amyloid plaques.The aim of the study was to establish the role of 
the Met35 residue in the oxidation and peptide aggregation processes.In the absence of copper ions the Met35 residue in Aβ molecule was readily oxidized to sulfoxide and this was almost the only modification observed.Met35 oxidation only slightly inhibited Aβ fibrillization, thus, we suggest that the possible variation in the Met oxidation state should not have a large effect on the plaque formation in vivo.The amyloid fibril formation is a complex autocatalytic process and the fibrillization in agitated solutions models the fibril growth phase, but not the initiation of the fibrillization process or the formation of the protofibrils and first fibrils.In the presence of copper ions that form a high affinity complex with Aβ, the oxidation was more complex, the addition of the first oxygen was still the fastest process, however, it was accompanied by unspecific modifications of several residues in the peptide.Met35 oxidation had no effect on the oxidative behavior of Aβ complex with copper ions – when Met35 was oxidized in the absence of copper, the addition of copper still lead to the appearance of diverse spectrum of oxidized peptides.These modifications may include the oxidation of His and Phe residues in Aβ and Glu 1 decarboxylation observed in the presence of oxygen and ascorbic acid .Thus, it can be concluded that Met35 residue is not a part of the radical generating mechanism of the Aβ–Cu complex.The oxidation state of Met35 enhances the toxicity of the artificial truncated Aβ25-35, which affects mitochondria .However, the high toxicity of this derivative is dependent on the C-terminal position of methionine and does not apply to the longer Aβ variants such as 25–36 , thus, this process is not related to the toxic effect of the full-length Aβ and amyloid plaques.It should also be noted that we did not observed the formation of potentially highly toxic Aβ dimers under our experimental conditions e. g. the oxidative dimerization of Aβ due to tyrosine crosslinking did not occur at considerably higher concentrations of the peptide and H2O2 than those in the brain.However, the oxidative dimerization of Aβ in vivo can be catalyzed by enzymes .It should also be noted that the Aβ “dimers” from biological material have never been analyzed chemically e. g. they are not necessarily dimers, but they can be longer peptides containing the Aβ sequence .Methionine added to the environment also does not serve as a reducing agent for the Cu–Aβ complex , thus, it can be concluded that from the viewpoint of redox ability and fibril formation the possible oxidation of Met35 residue is not an important property of the peptide.However, Met35 can play a role in AD pathogenesis due to putative interactions in the biological systems.It has been shown that in a Caenorhabditis elegans model of inclusion body myositis the knockout of MSRA-1 reduces the aggregation of Aβ into insoluble aggregates , however in this case the aggregation is intracellular.Even small differences in the peptide aggregation properties may be crucial in triggering the molecular events leading to the disease, however the lower amyloidogenity and the unaffected ability to catalyze redox reactions when bound to copper ions suggest that Met35 oxidation is most likely not essential in AD.Recently it has been shown that Aβ with oxidized Met 35 that does not cross the neuronal plasma membrane and is not uploaded from the extracellular space has no effect on synaptic plasticity when applied extracellularly .
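The first-order kinetics used above to describe the disappearance of reduced Aβ can be recovered from a linear fit of log-transformed peak intensities; a minimal sketch, assuming hypothetical normalised MALDI intensities rather than the measured data:

```python
import numpy as np

# Normalised intensity of the reduced-Abeta peak over time (hypothetical values).
t_min = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0])
intensity = np.array([1.00, 0.71, 0.50, 0.25, 0.13, 0.06])

# First-order decay: ln(I_t / I_0) = -k * t, so k is the negative slope of ln(I) vs. t.
slope, intercept = np.polyfit(t_min, np.log(intensity), 1)
k = -slope                    # first-order rate constant, per minute
half_life = np.log(2) / k     # t1/2 = ln(2)/k
print(f"k = {k:.3f} per min, t1/2 = {half_life:.1f} min")
```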
Aggregation of Aβ peptides into amyloid plaques is considered to trigger the Alzheimer's disease (AD), however the mechanism behind the AD onset has remained elusive. It is assumed that the insoluble Aβ aggregates enhance oxidative stress (OS) by generating free radicals with the assistance of bound copper ions. The aim of our study was to establish the role of Met35 residue in the oxidation and peptide aggregation processes. Met35 can be readily oxidized by H2O2. The fibrillization of Aβ with Met35 oxidized to sulfoxide was three times slower compared to that of the regular peptide. The fibrils of regular and oxidized peptides looked similar under transmission electron microscopy. The relatively small inhibitory effect of methionine oxidation on the fibrillization suggests that the possible variation in the Met oxidation state should not affect the in vivo plaque formation. The peptide oxidation pattern was more complex when copper ions were present: addition of one oxygen atom was still the fastest process, however, it was accompanied by multiple unspecific modifications of peptide residues. Addition of copper ions to the Aβ with oxidized Met35 in the presence of H2O2, resulted a similar pattern of nonspecific modifications, suggesting that the one-electron oxidation processes in the peptide molecule do not depend on the oxidation state of Met35 residue. Thus, it can be concluded that Met35 residue is not a part of the radical generating mechanism of Aβ-Cu(II) complex.
498
Accidental ecosystem restoration? Assessing the estuary-wide impacts of a new ocean inlet created by Hurricane Sandy
There are more than 2,200 coastal lagoons in the world, and in the United States lagoons line more than 75% of the East and Gulf Coasts.Barrier island lagoon estuaries tidally exchange with ocean waters via inlets, a process that influences many estuarine properties including temperature, salinity, water clarity, and productivity.Low rates of tidal exchange in many lagoonal estuaries allow them to accumulate nutrients from freshwater sources, and in turn, support highly productive food webs, but also make them vulnerable to anthropogenic eutrophication.Due to the delicate and ephemeral nature of barrier islands, many lagoonal estuaries are prone to large-scale disturbance by tropical storms and hurricanes which often create breaches of the protective barrier islands, resulting in the formation of new inlets that enhance coastal ocean exchange.The newly formed inlets can induce alterations of phytoplankton assemblages within estuaries via changes in circulation and/or nutrient availability.The south shore of Long Island, NY, is composed of a series of bar-built, lagoonal estuaries known as the South Shore Estuary Reserve that formed nearly 10,000 years ago by glacial activity, which collectively stretch more than 100 km west from New York City and encompass nearly 300 km2.Great South Bay is the largest of the barrier island estuaries on Long Island.It has been estimated that 28 inlets have been temporarily created by storms in GSB barrier islands over the last 300 years.On 29-October-2012, Hurricane Sandy created a barrier island breach within the eastern extent of GSB in a location known as ‘Old Inlet’, named as such because there was an inlet present in this area during the early 1800s.This breach, called the ‘New Inlet,’ has likely impacted many physical, chemical, and biological aspects of this ecosystem.Given the predicted climate change-induced intensification of hurricanes and cyclones this century, the occurrence of barrier island breaches such as the one studied here are likely to become more common in the future.Historically, GSB has experienced phase shifts in phytoplankton community assemblages and multiple types of harmful algal blooms.The most common HAB in GSB during the past four decades have been brown tides caused by Aureococcus anophagefferens.These blooms have been closely associated with regions of long water residence time and thus may be strongly influenced by new ocean inlets.Similarly, pathogenic and indicator bacteria concentrations can be influenced by water residence time and higher indicator bacteria levels have been observed in systems with lower flushing and oceanic inputs."In the 1970s and early 1980s, GSB hosted the largest hard clam fishery in the U.S. but intense harvesting led to a sharp decline in the hard clam population through the late 1980's.Since 1985, the brown tides caused by A. anophagefferens have further contributed to the decline of resident hard clam populations.Weiss et al. 
examined the growth and condition of hard clams across GSB in 2005 and found that juvenile clams in central GSB had significantly faster growth rates and lower mortality rates relative to sites closest to the Fire Island Inlet and in Bellport Bay where the New Inlet was recently formed.Instantaneous juvenile clam growth rates were positively correlated to temperatures below 24 °C and were also significantly correlated with several indicators of suspended food quantity and quality which co-varied independently of temperature.The goal of the current study was to quantify changes in the GSB ecosystem in response to the New Inlet that was formed by Hurricane Sandy in October 2012.Plankton, indicator bacterial levels, and juvenile bivalve growth rates in GSB and Moriches Bay before and after the formation of the New Inlet were compared.Water quality measurements before and after the formation of the New Inlet were also contrasted to assess changes in response to enhanced ocean flushing and how such changes may have altered plankton communities and clam performance across GSB and adjacent lagoons.Discrete water samples were collected via small boats at least biweekly from April through October and monthly November through March from 2013 through to 2015 at stations near Fire Island Inlet, in central GSB and Patchogue Bay, near New Inlet, and in western Moriches Bay.At each station, temperature, salinity, and dissolved oxygen were measured at the surface and near the bottom using a YSI® 556 sonde to confirm that the water column, which was typically 2 m deep, was well-mixed as expected.Secchi depths were also recorded.Water samples were collected in 20-L carboys at ∼0.5 m at each station and brought back to the laboratory for further analyses.An 18-year data set compiled by the Suffolk County Department of Health Services water quality monitoring program was analyzed to compare conditions before and after the formation of the New Inlet in GSB.Temperature, salinity, and dissolved oxygen were measured using automated YSI probes, chlorophyll a and nutrients were measured using oceanographic standard methods, and secchi disc depths were quantified.Long-term averages for all parameters, except water temperature and dissolved oxygen, were generated for all data from SCDHS stations in Moriches Bay and GSB from 2000 to 2012 and compared to 2013–2017.To account for seasonality in water temperature and dissolved oxygen, comparisons were made only for data collected during summer months, defined here as June 21 – September 20.Pre- and post-New Inlet water quality conditions were compared utilizing GIS to visualize spatial changes using an anomaly model, i.e. 
the difference between mean conditions before and after the formation of the New Inlet.Values between sampling stations in GIS figures were interpolated through the simple kriging geostatistical method using ESRI® ArcGIS® 10 with the Geostatistical Analyst extension.Historical comparisons were also made to a study of phytoplankton communities in GSB and Moriches Bay conducted during 2004 and 2005.During that study, four of the same stations near Fire Island Inlet, Mid-Bay, southern Bellport Bay and western Moriches Bay were sampled approximately bi-weekly from spring through to the fall.Patchogue Bay was not sampled by Curran.For both the 2004/2005 and the present study, plankton >10 μm via microscopy and analyses of algal pigments via high-performance liquid chromatography were quantified in the same way to facilitate comparisons.Surface seawater conditions across GSB and Moriches Bay after the formation of the New Inlet were mapped using small boat surveys.A YSI EXO2 sonde equipped with probes to measure temperature, salinity, dissolved oxygen, and in vivo chlorophyll a fluorescence received flow-through water from a pipe affixed at 0.25 m depth to a small boat, and horizontal transects were made across the south shore lagoons of Long Island from the Fire Island Inlet to the Shinnecock Inlet with a data-logging GPS unit and sufficient north-south longitudinal coverage to provide high-resolution mapping.A mapping cruise in August of 2013 is presented to assess peak summer temperature conditions.Tens of thousands of GPS-grounded data points were used to produce horizontal distribution maps of dissolved oxygen, temperature, salinity, and chlorophyll a using ESRI® ArcGIS® 10 with the Geostatistical Analyst extension and an ordinary kriging algorithm to interpolate between random point data.Discrete water samples were obtained to ground-truth continuous measurements of dissolved oxygen and chlorophyll a; in vivo chlorophyll a was significantly correlated with extracted chlorophyll a and in vivo measurements were adjusted to represent the true concentrations.Triplicate chlorophyll a samples were collected on GF/F glass fiber filters, frozen, and analyzed by standard fluorometric methods."Duplicate plankton samples were preserved with Lugol's iodine for settling chamber analysis using an inverted light microscope.Plankton >10 μm were grouped as diatoms and dinoflagellates.Densities of the brown tide alga, Aureococcus anophagefferens, were quantified on a flow cytometer using a species-specific immuno-assay on water samples preserved in 1% glutaraldehyde.Triplicate samples for flow cytometry were preserved with 10% formalin, flash frozen in liquid nitrogen, and stored at −80 °C until analysis on a CytoFLEX flow cytometer.Analysis of flow cytometric fluorescence and light scatter patterns provided abundance, size, and cell fluorescence for all identifiable cell populations <20 μm including phycoerythrin-containing cyanobacteria.Samples for HPLC algal pigment analysis were collected on GF/F glass fiber filters, flash frozen in liquid nitrogen, and stored frozen at −80 °C.Samples were analyzed via a C8 HPLC column using a methanol-based reversed-phase gradient solvent system, a simple linear gradient, and a column temperature of 60 °C.Five phyto-pigments found either exclusively or mostly in single classes of phytoplankton were quantified to represent five algal groups.Peridinin was used as an indicator of dinoflagellates, alloxanthin was analyzed as a proxy for cryptophytes, lutein to indicate 
chlorophytes, zeaxanthin to represent cyanobacteria, and fucoxanthin to represent diatoms as well as chrysophytes."Additionally, the pigment 19'butanoyloxyfucoxanthin has been shown to be a pigment marker of the brown tide alga, Aureococcus anophagefferens.Characterization of the plankton communities by FlowCAM was performed as described in Gobler et al.Fecal coliform data were provided by the New York State Department of Environmental Conservation from shellfish growing areas from Nicoll Bay in Great South Bay to Moriches Bay.NYSDEC quantified fecal coliform levels using standard Most Probable Number, 5 tube assays.Data were obtained for 2010–2014, with January 2010 through to October 2012 used as a baseline for conditions prior to the formation of the New Inlet.Data from January 2013 through December 2014 were analyzed to identify any effects of the New Inlet and associated changes in salinity and circulation on fecal coliform levels.For most sites, samples were collected monthly, although some extended gaps in sampling existed following storm events as well as during certain seasons depending on site location.Normally NYSDEC collects 7–12 samples per site per year.GIS plots were created in ArcGIS v10.0 using data from all sampling sites examined.Inverse distance weighted interpolation was used to identify spatial trends between points.Interpolation was performed on mean fecal coliform levels at each site before and after formation of the New Inlet, as well as the percent change observed at each site.Two cohorts of one year old, hatchery-raised juvenile Mercenaria mercenaria were supplied by Cornell Cooperative Extension of Southold, NY.Approximately 30 clams from each cohort were placed in separate rigid, plastic mesh bags within 1 m × 1 m x 0.6 m wire aquaculture cages that were weighted and placed on the bay bottom.Shelves within cages allowed the bags to be placed horizontally and separately ∼0.1–0.5 m above the benthos.Triplicate cages were deployed at three sites from late spring to fall 2014.Water temperature and dissolved oxygen at each site were measured every 15 min throughout this period using in situ loggers.These deployments of juvenile hard clams match the deployments performed by Weiss et al. with regard to the locations, timing of deployment, clams per bag, and clams per cage.The juvenile clams used by Weiss et al., 11 mm when experiments began, were intermediate in size between the two cohorts used in 2014.A second experiment was initiated in the late summer and fall of 2014 with a third cohort of juvenile M. 
mercenaria. These juveniles were deployed in separate shelves of the same cages described for the first experiment. Triplicate cages were also deployed at a fourth site, near the New Inlet in Narrow Bay. Length from the posterior to anterior end of the shell was recorded for all juvenile clams biweekly in tandem with water column sampling. A mean initial length was determined by measuring fifty clams. To estimate initial mean dry tissue weight, one hundred clams were selected from the initial set and clam tissues were dried at 55-60 °C for at least one week before weighing. Identical to Weiss et al., the final dry tissue weight per clam per cage was determined by measuring dry tissue weights for all individuals within each cage at the end of the experiment. Cumulative absolute growth rate of clams was calculated using the equation GR = (L2 − L1)/(T2 − T1), where GR is the absolute growth rate, L2 and L1 represent the final and initial length or dry tissue weight, respectively, and T2 and T1 represent the final and initial time, respectively. Water residence times of regions of GSB and Moriches Bay were determined using a salt balance approach that assessed the volumes of the bays, rates of freshwater flow, and the distribution of salinity across the estuarine region. The following two equations were used to determine residence time in days: tF = (f × V)/R and f = (SO − S)/SO, where V equals the volume of the estuary, R equals the freshwater input, SO equals the salinity of the ocean water, S equals the salinity of the section of estuary, and f is the freshwater fraction. Salinities measured from 2000 to 2012 and from 2013 to 2017 were used for the comparison before and after New Inlet formation. Freshwater discharge into each lagoonal section studied was obtained from Misut and Monti. It was assumed that water parcels were flushed to the ocean inlet to which they were nearest, with the exception of regions of central GSB following the formation of the New Inlet, given that the distribution of temperature and salinity suggested these regions were exchanging with the Fire Island Inlet. The extent to which water quality and plankton parameters were correlated with each other after the formation of the New Inlet was evaluated via a Spearman's rank order correlation matrix. Differences in environmental conditions and the plankton community among sites, and differences in the growth of hard clams among sites, were evaluated via One-Way ANOVA using Tukey Honest Significant Difference post-hoc tests. Differences in water quality parameters measured before and after the formation of the New Inlet were assessed via Student's t-tests. A G-test of independence was used to assess differences in the frequency of events before and after the formation of the New Inlet. Step-wise multiple linear regression models of instantaneous growth rates of the small and large juvenile clams deployed from May through to November were constructed with water quality parameters using forward stepwise selection. All these analyses were performed using SigmaStat within SigmaPlot 11.0. Plankton community structures quantified by FlowCAM were compared using multiple response permutation procedure and indicator species analysis in PC-ORD v.
A significant increase in salinity was seen across all sites examined following formation of the New Inlet. The largest changes in salinity were to the east of the site of the New Inlet in the eastern portions of Bellport Bay and Narrow Bay; at these sites, salinity increased by as much as 20%, from a mean of 24 to 28 in Bellport Bay. Sites further from the New Inlet also experienced increased salinity, with the strength of the trend decreasing gradually with distance from the New Inlet and only ∼5% increases seen at the easternmost and westernmost sites examined. Following the formation of the New Inlet, summer water temperatures were significantly lower at sites in close proximity to the New Inlet. At sites immediately north and east of the New Inlet, temperatures were, on average, 2 °C lower in summer, an 8% decrease. There were only minor variations in temperature elsewhere. Changes in summer dissolved oxygen levels varied across GSB following formation of the New Inlet. Sites located in close proximity to the New Inlet displayed significant increases in dissolved oxygen approaching 10% or ∼0.5 mg L−1, whereas this trend reversed abruptly at sampling sites further west and east. Changes in water clarity as measured via secchi disk depth displayed a spatial distribution similar to dissolved oxygen. Sites located close to the New Inlet had secchi disk depths ∼0.3 m deeper, representing a significant, 25% increase in water clarity. In contrast, there were significant declines in secchi disk depth at the far western and eastern sites examined, and minimal, non-significant change in areas located slightly closer to the New Inlet. Changes in summer chlorophyll a levels following the formation of the New Inlet differed between the eastern and western portions of GSB. The western part of the bay displayed significantly increased chlorophyll a with some variation between sites, while the trend reversed to the east, with lower chlorophyll a in eastern Patchogue Bay and sites extending through Moriches Bay. At sites surrounding the New Inlet, chlorophyll a declined by as much as 5 μg L−1, a trend less spatially restricted than the increases in dissolved oxygen and water clarity. Changes in total nitrogen following the formation of the New Inlet were similar to changes in chlorophyll a. Sampling sites in the western portions of GSB displayed a slight but significant increase in total nitrogen following the formation of the New Inlet, while sites in the eastern portion of GSB and Moriches Bay showed a significant decrease. The strongest changes were again observed near to the New Inlet, where total nitrogen declined by as much as 0.14 mg L−1. Declines in dissolved nitrogen following the formation of the New Inlet extended further into the western portions of GSB than for total nitrogen. Furthermore, sites in the eastern portion of GSB and Moriches Bay showed significant declines in dissolved nitrogen, decreasing by as much as 0.09 mg L−1 at sites near to the New Inlet. During a continuous monitoring cruise across the south shore of Long Island in August of 2013, salinity was highest near the ocean inlets. Salinity was greater than 30 at the Fire Island Inlet, the New Inlet site, the Moriches Inlet, and the Shinnecock Inlet and generally lower than 28 between the inlets. Bay temperatures close to ocean inlets were <21 °C, whereas mid-bay regions had temperatures in excess of 23.5 °C. Dissolved oxygen was below 7 mg L−1 throughout most of Moriches Bay and Great South Bay during August 2013 but rose above 7 mg L−1
around the ocean inlets. Chlorophyll a concentrations across the south shore during August 2013 were lower near to all of the ocean inlet sites than in the middle sections of bays. Total concentrations of chlorophyll a, across all sites, were consistently low during the winter and spring months but were higher in summer and fall. In 2013 and 2015, chlorophyll a peaked in August and June, respectively, whereas in 2014, chlorophyll a peaked in early December. The concentrations of chlorophyll a at the Mid-Bay and Patchogue Bay sites were significantly higher than at other sites. Four distinct brown tides caused by the pelagophyte A. anophagefferens occurred from 2013 to 2015. The first occurred during the summer of 2013, when concentrations exceeding 4 × 10⁵ cells mL−1 occurred from Fire Island Inlet, through Mid-Bay, and into Patchogue Bay, with peak densities exceeding 10⁶ cells mL−1 in Mid-Bay in July. For perspective, densities exceeding 4 × 10⁴ cells mL−1 can be harmful to marine life. While the bloom collapsed in August, it returned to Mid-Bay and Patchogue Bay during September and October of 2013, peaking at ∼9 × 10⁵ cells mL−1. The brown tide returned in the fall of 2014, persisting at >5 × 10⁵ cells mL−1 from early October through to December at the Mid-Bay and Patchogue Bay sites and achieving lower levels for a shorter period of time at the Fire Island Inlet site. The final brown tide of this study period occurred during June and July of 2015, when a bloom initiated at Mid-Bay and Patchogue Bay and then spread to other locations, although with lower cell densities. Over the entire study time and area, brown tide concentrations at Mid-Bay and Patchogue Bay were significantly higher than at New Inlet and Moriches Bay, as were concentrations of 19′-butanoyloxyfucoxanthin in Mid-Bay compared to Fire Island Inlet, New Inlet and Moriches Bay. Characterization of the plankton communities by FlowCAM also showed that the Mid-Bay and Patchogue Bay sites were similar to each other but statistically distinct from the other three sites (Mid-Bay and Patchogue Bay vs. the other three sites), and that the plankton groups most responsible for the difference were 2–3 and 3–7 μm equivalent spherical diameter particles (Supplementary Table S2), which cover the size range of A. anophagefferens. Concentrations of pico-cyanobacteria followed a seasonal trend of low concentrations during the winter and spring months, increasing throughout summer and remaining high in the fall. Concentrations of cyanobacteria were significantly higher at Mid-Bay and Patchogue Bay than at the other three sites, but not statistically different among the remaining sites. The lowest cell densities were consistently observed at the New Inlet site. As suggested by the seasonal patterns, temperature was positively correlated with the abundance of both cyanobacteria and A. anophagefferens. In contrast, the abundances of pennate diatoms and dinoflagellates were inversely correlated with temperature. Dinoflagellate abundance was also inversely related to salinity and positively related to the abundance of cryptophytes as indicated by the concentration of alloxanthin. The abundance of A.
anophagefferens was positively correlated with total chlorophyll a concentration, and both of those were inversely correlated with secchi disk depth. There were several differences in the plankton communities of GSB when comparing before and after the formation of the New Inlet. Total chlorophyll a concentrations were significantly higher and lower at Mid-Bay and the New Inlet, respectively, in 2013–15 than in 2004–05. Concentrations of cyanobacteria were significantly higher at the Fire Island Inlet, Mid-Bay, and the New Inlet in 2013–15 than in 2004–05. Zeaxanthin concentrations were also significantly higher at Fire Island Inlet and Mid-Bay, but did not increase together with cyanobacterial abundance at the New Inlet and increased without an increase in cyanobacterial abundance at Moriches Bay. Concentrations of dinoflagellates were significantly lower at the Fire Island Inlet, Mid-Bay, New Inlet, and Moriches Bay in 2013–15 than in 2004–05, while the concentration of peridinin only declined significantly at the New Inlet. Centric diatom concentrations were significantly higher at Fire Island Inlet, the New Inlet, and Moriches Bay in 2013–15 than in 2004–05, and pennate diatom concentrations were significantly higher at the Fire Island Inlet and Mid-Bay in 2013–15 than in 2004–05. The concentration of fucoxanthin increased with pennate diatom cell counts at Mid-Bay and with centric diatom cell counts at Moriches Bay, but not at Fire Island Inlet or the New Inlet. Abundances of cryptophytes, as indicated by concentrations of alloxanthin, were significantly lower at the New Inlet in 2013–15 than in 2004–05 but significantly higher in Moriches Bay in 2013–15 than in 2004–05. Abundances of chlorophytes, as indicated by concentrations of lutein, were significantly higher at the Fire Island Inlet, Mid-Bay, and Moriches Bay in 2013–15 than in 2004–05. There were significant changes in fecal coliform bacteria in GSB and Moriches Bay after the formation of the New Inlet, with levels significantly declining in the region near the New Inlet but remaining similar elsewhere, save for the region east of the Connetquot River. Specifically, areas north and east of the New Inlet such as Narrow Bay and southeast Bellport Bay displayed declines in mean fecal coliform bacteria levels of 44% (to 14 CFU 100 mL−1) and ∼40%, respectively, after the formation of the New Inlet, whereas areas west of the New Inlet and to the far east showed no spatially coherent changes. Declines were also seen in areas further west, but fecal coliform levels at most of these sites remained above acceptable shellfish harvest standards. During summer and fall 2014, small juvenile hard clams deployed at New Inlet achieved a significantly greater shell length and dry weight compared to individuals deployed at Mid-Bay and Fire Island Inlet. Mid-Bay clams had the smallest shell length and dry weight. The larger juvenile clams deployed at New Inlet achieved a significantly greater shell length and dry weight than clams in Mid-Bay, which also had the smallest gain in dry weight. For the fall 2014 experiment, juvenile hard clams deployed at Fire Island Inlet, New Inlet, and Narrow Bay reached similar final lengths and dry weights, while juvenile clams deployed within Mid-Bay were significantly smaller and lighter. When comparing growth rates based on length for both small and large juvenile hard clams at all sites in 2014 and 2005, rates were significantly higher in 2014 than in 2005. Regarding location, both small and large juvenile clams grew significantly
faster in Fire Island Inlet and significantly slower in Mid-Bay in 2014 than in 2005.Small juvenile clams, but not larger juveniles, grew significantly faster in the New Inlet in 2014 compared to 2005.A number of aspects of water quality and plankton communities in eastern GSB and Moriches Bay were significantly different after the formation of the New Inlet, including significant decreases in algal biomass, summer water temperatures, fecal coliform bacteria, and nitrogen concentrations as well as significant increases in salinity, summer dissolved oxygen, and water clarity.At the same time, phytoplankton biomass and salinity increased in central GSB.Furthermore, small juvenile M. mercenaria grew significantly faster at the location near the New Inlet after its formation compared to before, while clams within the Mid-Bay location displayed slower growth compared to the New Inlet and compared to growth rates at the Mid-Bay location prior to the formation of the New Inlet.Collectively, these findings provide new insight regarding the effects of barrier island breaches on the ecological functioning of lagoonal estuaries.To assess the extent to which changes in environmental parameters were a function of more rapid flushing of GSB and Moriches Bay following the formation of the New Inlet, residence times were calculated using a salt balance approach.Following the formation of the New Inlet, all regions of GSB and Moriches Bay had shorter residence times.The most notable changes occurred within the regions closest to the New Inlet including Moriches Bay, Narrow Bay, and Bellport Bay where residence times decreased by 60–90%.In contrast, residence times in the middle regions of GSB changed the least, ≤ 20%.The greater decline in residence time of GSB West and Nicoll Bay could reflect the effects of the dredging of Fire Island Inlet that occurred in 2013.These changes were broadly consistent with the changes in water characteristics such as total nitrogen and chlorophyll a that displayed large and significant declines in Moriches Bay, Narrow Bay, and Bellport Bay but increases in other parts of GSB.Hence, while changing nutrient levels may alter levels of algal biomass and thus dissolved oxygen and water clarity, it seems likely that physical flushing had a dominant organizing effect on changes in water quality following the formation of the New Inlet.Consistent with this interpretation, there were significant correlations between the changes in dissolved nitrogen, total nitrogen, chlorophyll a, secchi disc depth, salinity, and summer temperatures and the percent change in residence times for sections of GSB and Moriches Bay following the formation of the New Inlet.Comparisons of measurements made before and after the formation of the New Inlet by both municipal monitoring programs and academic laboratories provided multiple examples of how the New Inlet altered numerous aspects of water quality in the GSB and Moriches Bay ecosystems.While significant increases in salinity were detected throughout all GSB and western Moriches Bay after the formation of the New Inlet, changes in other environmental parameters were more spatially restricted.For example, summer temperature and dissolved nitrogen were lower through the eastern half of GSB and all western Moriches Bay but were largely unchanged elsewhere.In contrast, chlorophyll a and total nitrogen decreased within Bellport Bay, Narrow Bay, and western Moriches Bay but increased in central and western GSB.Increases in water clarity and dissolved 
oxygen were limited to Bellport Bay and Narrow Bay.Changes in nitrogen, chlorophyll a, and dissolved oxygen are interdependent and influenced by concurrent biological processes: lower nitrogen availability may have restricted the accumulation of algal biomass which in turn would lead to lower bacterial productivity and thus decreased community respiration rates.The ∼2 °C decrease in summer temperatures in Bellport Bay would cause an increase in dissolved oxygen of approximately 0.3 mg L−1, but actual dissolved oxygen changes were larger than this, suggesting both factors contributed to the oxygen increase.Similarly, changes in other parameters were likely to be influenced by both physical processes associated with enhanced circulation from the New Inlet and biological processes directly and indirectly affected by the New Inlet.The changes in residence times for GSB and Moriches Bay as well as changes in water characteristics following the formation of the New Inlet indicate that the New Inlet was not flushing equally to the east and west but rather was primarily exchanging eastward towards the Moriches Inlet.For example, levels of nitrogen and chlorophyll a decreased in regions north and east of the New Inlet but increased to the west.This pattern of ocean exchange is consistent with at least two other ocean inlets on the south shore of Long Island, the Shinnecock Inlet and Jones Inlet, both of which strongly exchange and flush ocean water to the east, but significantly less so to the west.This asymmetrical ocean flushing of the New Inlet was not predicted and thus represented an unanticipated consequence.While the long-term monitoring data from SCDHS revealed an obvious decline in chlorophyll a concentrations in eastern GSB and western Moriches Bay as well as increases in central GSB, our detailed analyses of plankton communities from before and after formation of the New Inlet revealed how individual plankton populations have varied in space and over time.Compared to the sites located closer to ocean inlets, during 2013–2015 the Mid-Bay site had significantly higher levels of brown tide cells and their pigment 19′-butanoyloxyfucoxanthin, cyanobacteria and their pigment, the cryptophyte pigment alloxanthin, as well as peridinin, fucoxanthin, and the green algal pigment lutein.This is a substantial departure from the prior study of this region in 2004 and 2005 when the New Inlet region and the Mid-Bay region had nearly identical levels of total chlorophyll a, diatoms and their pigment fucoxanthin, cyanobacteria and their pigment, and the green algal pigment lutein, with greater peridinin and alloxanthin in Bellport Bay.After the formation of the New Inlet, cyanobacteria were more abundant and dinoflagellates less abundant throughout GSB, while other changes varied by location.The Fire Island Inlet and Mid-Bay sites displayed significantly more pennate diatoms, zeaxanthin, and the green algal pigment lutein, while the Fire Island Inlet and New Inlet sites both showed increases in centric diatoms.Uniquely, the site at the New Inlet had lower levels of alloxanthin and peridinin than had been measured before the breach, and unchanged concentrations of lutein and zeaxanthin.The increase in centric diatoms but decline in fucoxanthin at the New Inlet site could reflect changes in the diatom community composition or physiological state altering fucoxanthin content per cell, or declines in other phytoplankton that also contain fucoxanthin, such as chrysophytes.Similar factors may account for other 
mismatches between changes in cell abundance and pigment concentrations between the pre- and post-breach periods.Overall, the data suggest that the declines in algal biomass in Bellport Bay were driven by reductions in Aureococcus, dinoflagellates, cryptophytes, and possibly chrysophytes that offset smaller rises in Synechococcus and centric diatoms.Prior to the formation of the New Inlet, dinoflagellate densities within GSB were inversely correlated with salinity and positively correlated with alloxanthin and inorganic nitrogen.This statistical grouping was driven largely by Bellport Bay which, at the time, had the highest levels of dinoflagellates, peridinin and alloxanthin as well as inorganic nitrogen but the lowest salinity across the entire south shore of Long Island.After the formation of the New Inlet, dinoflagellate densities were still inversely correlated with salinity and positively correlated with concentrations of alloxanthin.Elevated levels of nutrients delivered from groundwater and streams probably supported the growth of dinoflagellates as well as cryptophytes which are known to be a source of prey for mixotrophic dinoflagellates.We conclude that the formation of the New Inlet enhanced ocean flushing in Bellport Bay which facilitated the large-scale decline of dinoflagellates by increasing salinity, decreasing nutrients, and decreasing the abundance of cryptophytes.Given that some dinoflagellates, including those that formerly dominated Bellport Bay, can form HABs and cause fish kills, this change represents a clear ecosystem benefit for Bellport Bay.For central GSB, the frequency of brown tides caused by A. anophagefferens from 2013 to 2018 was significantly greater than the frequency before the formation of the New Inlet.As a decrease in residence time throughout GSB was expected to reduce the likelihood of brown tides, this represents an unexpected negative association between the New Inlet and the GSB ecosystem.It is possible that the increased occurrence of brown tide in central GSB may not be a result of the New Inlet, but instead may be attributed to decadal trends in warming and increasing nitrogen loading to GSB which are making it more vulnerable to brown tides, and that this trend was partly alleviated in the Bellport Bay region by increased flushing through the New Inlet.Also, while our salt balance-based calculations suggest that even the central region of GSB experienced a decline in residence times since the formation of the New Inlet, true residence times in GSB are controlled by additional factors including winds and subtidal volume fluxes.It is also possible that a new circulation pattern that encourages ocean exchange of waters adjacent to inlets but leaves a more sluggish nodal point in the central GSB may encourage the formation and maintenance of brown tide.The declines in fecal coliform levels following the formation of the New Inlet in regions north and east of the inlet were consistent with the increased flushing effects observed in other environmental parameters.Given that ocean water has been found to have few, if any, fecal indicator bacteria, enhanced ocean flushing may be diluting fecal contamination in GSB and higher salinity may also increase decay rates of indicator bacteria.Prior to the formation of the New Inlet, the poorly flushed regions directly north and east of the New Inlet were uncertified or only seasonally certified for shellfish harvest.The reduction in coliform bacterial levels below the level at which shellfish harvest is 
allowed indicates that the enhanced flushing from the New Inlet could lead to a reopening of these regions for shellfish harvesting. It is notable that on 24 August 2014, the largest 24-h rain event in the history of New York State occurred on Long Island, with the region near Narrow Bay receiving >330 mm of rain. While many lagoonal regions on Long Island experienced spikes in fecal coliform bacteria following this event, levels were unchanged in Narrow Bay. With enhanced circulation from the New Inlet, it appears that this region is less vulnerable to contamination by indicator and potentially pathogenic bacteria. The growth and condition of hard clams in GSB in 2005 were found to be strongly influenced by temperature as well as food quality and quantity. In 2014, water temperature was again strongly correlated with instantaneous shell-based growth rate in juvenile clams. The optimal temperature range for hard clams is typically between 20 and 24 °C, while temperatures below or above this range can result in little to no growth. Consistent with this concept, the growth of large and small juvenile clams was strongly and positively correlated with cooler temperatures but inversely or poorly correlated with higher temperatures. Water temperatures during the experiments presented here were cooler by ∼1.2 °C than the 2000–2012 average for sites where hard clams were deployed, suggesting that the negative effects of higher summer temperature would have been relatively less evident in 2014. On average, the percentage of days when water temperatures were above 25 °C decreased from ∼14% to <8% near the New Inlet site between the periods before and after the formation of the New Inlet, but remained relatively similar at the Mid-Bay site. Regionally, temperatures have increased by several degrees in recent decades and this rise is expected to continue in the coming decades. Hence, one key effect of the New Inlet is likely to be maintaining ambient water temperatures within an optimal range for maximal hard clam growth rates for a greater number of days during summer, as well as in the fall when ocean temperatures exceed those of the more rapidly cooling bay, allowing clams to grow faster for longer. To model the influence of temperature and all other environmental factors measured concurrently on hard clam growth during the spring through fall 2014 period, step-wise, forward multiple linear regression models were made (a sketch of this type of model selection follows this passage). For small juvenile clams, the best model explained 76% of the variance in growth rate and included temperature as well as the pigments 19′-butanoyloxyfucoxanthin, fucoxanthin, and lutein (model intercept −0.0171). For large juvenile clams, the best model explained 71% of the variance in growth rate and included temperature as well as the pigment 19′-butanoyloxyfucoxanthin (model intercept −0.0167). These models show that, consistent with data from 2005, phytoplankton food quality and quantity influenced hard clam growth rates in 2014. The pigment 19′-butanoyloxyfucoxanthin, negatively associated with the growth rate of both small and large clams in 2014, is found exclusively in the brown tide alga, A. anophagefferens, which is well-known to be harmful to hard clams and other shellfish.
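The forward stepwise selection described above can be sketched as follows. This is not the authors' SigmaStat procedure and the published coefficients are not reproduced; the adjusted-R² entry criterion, the column names and the commented file name are hypothetical placeholders, and the function expects a pandas DataFrame of site-by-date observations.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary least squares fit (X includes the intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def forward_stepwise(df, response, candidates, min_gain=0.01):
    """Greedy forward selection: add the predictor that most improves
    adjusted R^2, stopping when the improvement falls below min_gain."""
    y = df[response].to_numpy(dtype=float)
    selected, best = [], -np.inf
    remaining = list(candidates)
    while remaining:
        scores = {}
        for var in remaining:
            cols = df[selected + [var]].to_numpy(dtype=float)
            X = np.column_stack([np.ones(len(cols)), cols])
            scores[var] = adjusted_r2(X, y)
        var, score = max(scores.items(), key=lambda kv: kv[1])
        if score - best < min_gain:
            break
        selected.append(var)
        remaining.remove(var)
        best = score
    return selected, best

# Hypothetical usage with placeholder column names:
# growth = pd.read_csv("clam_growth_2014.csv")   # assumed file, not the study's
# terms, r2 = forward_stepwise(growth, "growth_rate",
#                              ["temperature", "but_fuco", "fucoxanthin", "lutein"])
```

Forward selection of this kind adds one predictor at a time and keeps it only if the fit improves enough, which is how a small set of pigment and temperature terms can end up explaining roughly three-quarters of the growth-rate variance, as reported above.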
The highest brown tide densities in 2014 were observed at the Mid-Bay station, and juvenile clams at this location had a 10% rise in mortality during the brown tide and displayed the slowest growth of all regions. The impacts of brown tides on hard clams are dose-dependent, and so the faster growth of juvenile hard clams near the New Inlet in 2014 compared to 2005 suggests that the reduction of A. anophagefferens densities during brown tides in eastern GSB, due to enhanced ocean flushing since the formation of the New Inlet, has been sufficient to benefit hard clams. During brown tides in GSB, densities of A. anophagefferens in Bellport Bay have been significantly lower since the formation of the New Inlet than during the period before. Lutein was also present at greatest concentrations at the Mid-Bay site and is found primarily in green algae, some of which have in the past created HABs in GSB that were harmful to bivalves. For the small juvenile hard clams, the positive association of growth with fucoxanthin likely represents the influence of centric diatoms and chrysophytes, two classes of phytoplankton known to support robust growth in M. mercenaria. During 2005, juvenile clam growth rates in GSB were negatively correlated with dinoflagellates, and dinoflagellates are associated with lowered food web production. Pico-cyanobacteria and pennate diatoms have also been shown to have detrimental effects on bivalves. There were no relationships between these groups and hard clam growth rates in 2014, suggesting either that their effects are weaker or that there was not sufficient variation in these parameters among the sites studied in 2014. Several of the physicochemical and biogeochemical changes documented here could have broad benefits for various aspects of the GSB-Moriches Bay ecosystem and are likely to be indicative of changes that may occur in other temperate lagoons that are breached by ocean water. For example, ocean flushing enhanced water clarity in eastern GSB and western Moriches Bay, a change likely to benefit the resident seagrass community that has greatly diminished since the early 1980s due to light limitation. Prior to the formation of the New Inlet, summer water temperatures in GSB had frequently risen above 25 °C, a level known to be stressful to Zostera marina and hard clams. Since then, summer temperatures near the New Inlet have been significantly lower due to enhanced ocean exchange. Since brown tides are also harmful to seagrasses, zooplankton, and other bivalves, the New Inlet's ability to mitigate brown tides may benefit the entire marine food web. Seasonal analyses revealed that changes in several water characteristics were most profound in spring and summer months, when improved water quality is most critical for marine resources. Early life stage finfish and shellfish that are spawned during late spring and early summer are generally more sensitive to stressors such as low dissolved oxygen. These seasons are also when coastal water quality problems such as HABs and summer hypoxia arise, and they are also periods when coastal water bodies are used extensively for recreational and commercial purposes. Reducing the severity of these phenomena may reverse declining abundances of key fishery resource species such as hard clams and bay scallops. The lower levels of nitrogen and algal biomass within eastern GSB following the formation of the New Inlet represent a localized reversal of decadal trends of
increasing nitrogen and HABs in New York coastal waters, which could have broad ecosystem benefits.However, this improvement has been limited in spatial extent and the intensification of algal blooms and increasing concentrations of total nitrogen in central GSB following the formation of the New Inlet have important managerial implications.Reduction of excessive catchment delivery of nitrogen may be needed to mitigate these HABs and improve water quality more broadly throughout GSB.Given the predicted climate change-induced intensification of hurricanes and cyclones this coming century, the occurrence of barrier island breaches such as the one studied here are likely to become more common in the future.The New Inlet in GSB created by Hurricane Sandy had a series of expected as well as unanticipated consequences for water quality, plankton communities, indicator bacteria, and juvenile hard clam growth.In locations immediately north and/or east of the New Inlet, bay residence times, summer water temperatures, total and dissolved nitrogen, chlorophyll a, and fecal coliform bacteria concentrations decreased, while salinity, dissolved oxygen, and water clarity increased.These changes are expected to improve the performance of resident seagrasses and bivalves, the latter of which was observed in experiments with increased juvenile hard clam growth near the New Inlet.In contrast, regions west of the New Inlet within the center of GSB experienced less change in residence times, increases in chlorophyll a, and harmful brown tides, and decreases in water clarity and summer dissolved oxygen levels.These changes, potentially reflecting altered circulation and/or ongoing anthropogenic impacts, could have negative consequences for vital resources including zooplankton, seagrasses, and bivalves, the latter of which was observed in the present study with decreased juvenile hard clam growth at a central GSB station when compared to prior studies there.Therefore, while new ocean inlets can provide localized ecosystem benefits, the results of this study suggest that within eutrophic ecosystems such as GSB, broader scale, watershed-based management initiatives are required to achieve whole ecosystem restoration.
Barrier island lagoons are the most common type of estuary in the world and can be prone to eutrophication as well as the formation and closure of ocean inlets via severe storm activity. This study describes the biological, chemical, and physical changes that occurred along the south shore of Long Island, NY, USA, following the formation of a new ocean inlet in eastern Great South Bay (GSB) by Hurricane Sandy in October of 2012. Time series sampling and experiments were performed at multiple locations within GSB and neighboring Moriches Bay from 2013 through to 2018. Historical comparisons to prior water quality monitoring data, fecal coliform concentrations, and hard clam growth rates were also made. Measurements indicated that the New Inlet provided asymmetrical ocean flushing. Within locations north (Bellport Bay) and east (Narrow Bay, western Moriches Bay) of the New Inlet, water residence times, summer water temperatures, total and dissolved nitrogen, chlorophyll a, the harmful brown tide alga, Aureococcus anophagefferens, pigments associated with diatoms and dinoflagellates (fucoxanthin and peridinin), and fecal coliform bacteria levels all significantly decreased, while salinity, dissolved oxygen, and water clarity significantly increased. In contrast, waters west of the New Inlet within the center of GSB experienced little change in residence times, significant increases in chlorophyll a and harmful brown tides caused by A. anophagefferens, as well as a significant decrease in water clarity and summer dissolved oxygen levels. Growth rates of juvenile hard clams (Mercenaria mercenaria) near the New Inlet increased compared to before the inlet and were significantly higher than in central GSB, where growth rates significantly declined compared to before the inlet. Hence, while enhanced ocean flushing provided a series of key ecosystem benefits for regions near the New Inlet, regions further away (> 10 km) experienced more frequent HABs and poorer performance of bivalves, demonstrating that enhanced ocean flushing provided by the breach was not adequate to fully restore the whole GSB ecosystem.
499
Mansonella perstans, Onchocerca volvulus and Strongyloides stercoralis infections in rural populations in central and southern Togo
Mansonella perstans, Onchocerca volvulus and Strongyloides stercoralis are widespread helminth parasites in the tropics. M. perstans is endemic in central and western Africa, as well as in Central and Latin America. The adult filariae of M. perstans are found in the connective tissue of the serous body cavities, and the microfilariae which are released by gravid female filariae circulate in the peripheral blood. M. perstans is transmitted by Culicoides spp., a cosmopolitan genus whose species transmit, besides Mansonella spp., also viral and protozoan pathogens. Unclear symptoms of infection and the lack of diagnosis lead to inaccurate epidemiological data on mansonelliasis, and the prevalence in endemic countries varies among bio-ecological zones. Onchocerca volvulus, like Mansonella spp., is taxonomically classified in the nematode family Onchocercidae. In the human host the adult filariae of O. volvulus reside in nodular fibrous granulomas formed as a result of the host immune defence. In those nodules gravid female O. volvulus release larvae, the microfilariae (Mf), which migrate in subcutaneous tissues causing dermal irritation, inflammation and skin lesions. When entering the eyes, Mf will damage corneal and retinal tissues, which ultimately may result in blindness after years of infection. In West Africa, O. volvulus transmission and human infection occur with the blood meal of black fly species of the Simulium damnosum s.l. complex, which breed in fast-flowing rivers, hence the name river blindness. An estimated 37.2 million people are infected with O. volvulus, about 1 million are blinded by onchocerciasis, and 99% of the infected live in Africa. Strongyloides stercoralis infection is initiated by third-stage larvae (L3) which penetrate the skin and migrate through blood vessels and lungs to finally mature in the small intestine, where gravid female worms release first-stage larvae (L1). Those L1 larvae are either passed with the faeces, or they auto-infect the human host. Such continuous auto-infection and the parthenogenetic reproduction of female S. stercoralis establish persistent and chronic infections in the human host. Strongyloidiasis is found worldwide, with higher prevalence in the tropics and subtropics and an estimated 100 million infected. In Togo, epidemiological evaluations on M. perstans are rare, surveys on O. volvulus are important especially in the context of mass drug administration of ivermectin, which has been performed for decades, and data on the prevalence of S. stercoralis infection are not available. We have conducted immune-epidemiological and molecular surveys on the prevalence of M. perstans, O. volvulus and S.
stercoralis in rural areas in central and southern Togo. Such evaluations provide specific data on the regional distribution of mansonelliasis, onchocerciasis and strongyloidiasis, and such mapping is indispensable for control efforts and interventions. The survey participants are resident in the villages of Tsokple, Kpati-Cope, Igbowou-Amou, Atinkpassa, Amouta and Tutu-Zionou in the Région Plateaux in Togo. The villages Sagbadai, Bouzalo and Kéméni are located in the Région Centrale. All villages are under surveillance of the Togolese National Onchocerciasis Control Program. The protocol of the study was reviewed and approved by the Togolese Bioethics Committee for Research in Health (Avis #015/2012/CBRS, Document #2804/2012/MS/CAB/DGS/DPLET/CBRS/16.November 2012, Document #013/2015/CBRS/3.Septembre 2015), and study authorization and approval was granted by the Ministry of Health of Togo. Consent from each study participant was documented and confirmed by signature; consent for study participation by those <18 years of age was given verbally by each participant, and written consent and approval for participation of those <18 years was always obtained from the parents or the accompanying responsible adults. For correct and complete understanding, explanations were always given in the local language. Before each follow-up survey, approval was obtained from the appropriate regional and district-level health authorities. All specimens used in this study were collected from study participants who provided written informed consent. The minimum age of participants was ≥10 years in the Région Centrale and ≥5 years in the Région Plateaux. For the Région Plateaux, the dry blood spot (DBS) sample collections were accompanied in parallel by the collection of skin biopsies to determine the prevalence of O. volvulus microfilariae. In the central region, DBS were collected without skin biopsies. In the Régions Centrale and Plateaux, the village populations have received ivermectin annually for several years by means of mass drug administration (MDA). In addition, the National Lymphatic Filariasis Elimination Program in Togo has extended the distribution system established by the onchocerciasis program through the co-administration of albendazole with ivermectin. The blood samples were collected by medical assistants and conserved as dry blood spots on protein saver cards type 903™. The samples were coded immediately, air-dried, sealed airtight and stored at +8 °C. The DBS samples served for DNA extraction as well as for antibody elution for ELISA purposes. With corneoscleral punches, one skin biopsy was taken from the upper dermis over each of the left and right iliac crests, giving a total of two samples per participant. Skin biopsies were immediately incubated in physiological saline solution on segmented glass slides, and after 30 min the slides were examined under a microscope for O.
volvulus microfilariae emerging from the biopsies, and the Mf numbers were counted. From the protein saver cards, completely dry blood circles were cut out with punching scissors. From these dry blood circles the DNA was extracted using the QIAamp DNA Mini Kit according to the recommended dry blood spot protocol. The detection of parasite DNA was carried out by real-time PCR using the Rotor-Gene RG3000 cycler. The primers for the respective parasites are shown with their PCR conditions in Table 1. The primer and probe selection was accomplished using the online software Primer3. The real-time PCR primer pairs, probes and test conditions used for the detection of Mansonella perstans were: Mp-primer-fwd 5′-CTGCGGAAGGATCATTAA-3′; Mp-primer-rev 5′-TGCATGTTGCTAAATAAAAGTG-3′; Mp-probe 5′-FAM-CGAGCTTCCAAACAAATACATAATAAC-TAM-3′. The rtPCR conditions were 50 °C/2 min and 95 °C/10 min, followed by 45 amplification cycles. For the OvAg-IgG4 ELISA an adult worm antigen extract (OvAg) from male and female O. volvulus was used. The preparation of the O. volvulus antigen was accomplished as previously described by Mai et al., 2007 and Lechner et al., 2012. Briefly, adult worms were isolated as described and then washed in phosphate-buffered saline (PBS), transferred into a Ten Broeck tissue grinder and then homogenized extensively on ice. The homogenate was then sonicated twice for 3 min on ice and centrifuged at 16,000g for 30 min at 4 °C. The S. stercoralis antigen was prepared from third-stage larvae of S. stercoralis which were kindly made available by Prof. James B. Lok. The L3 larvae were pooled in 2 ml of PBS, homogenized on ice for 30 min with a Ten Broeck tissue grinder, and then pulse ultra-sonicated on ice for 10 min. This homogenate was then centrifuged for 15 min at 16,000g, the supernatant collected and the protein concentration of this PBS-soluble S. stercoralis L3 antigen extract (SsL3Ag) determined. The OvAg and SsL3Ag preparations were further used for the antigen-specific IgG4 ELISAs. From the protein saver cards, blood circles were cut out with a punching device. The elution of antibodies from the cards was performed with 200 μl of PBS containing 0.05% Tween20 and 5% bovine serum albumin for 2 days at 4 °C in Nunc 96 DeepWell polypropylene plates. For the OvAg-IgG4 ELISA, the adult worm antigen extract from male and female O. volvulus was applied to measure serological IgG4 responses. The sensitivity of the IgG4-directed O. volvulus adult worm antigen-specific ELISA was determined with a contingency analysis and calculated against the results of skin biopsies. The calculated sensitivity of the IgG4-OvAg-ELISA was 93.1%. Microtiter plates were coated with OvAg or SsL3Ag in PBS pH 7.4 overnight, then the coating antigen solutions were discarded, and plates were blocked with PBS-Tween20® containing 5% fetal bovine serum at room temperature for 1.5 h.
Thereafter, plates were washed with PBS-Tween20®, eluted blood samples were added without dilution, and plates were incubated at 37 °C for 2 h. After washing with PBS-Tween20®, an anti-human IgG4, horseradish peroxidase-conjugated monoclonal antibody was added for 1.5 h; then plates were washed as above and TMB substrate was added. Plates were incubated at room temperature for 15 min, the reaction was stopped with 50 μl of 0.5 M sulfuric acid, and optical densities (OD) were measured at 450 nm with a microplate reader. The collected data were analyzed using the statistics software SAS JMP 11.1.1. The IgG4 responses of samples were determined in duplicate. First, from the raw ODs the background values were subtracted to obtain the net OD. The negative/positive threshold values were determined for each plate by calculating the mean IgG4 OD of 10 samples from O. volvulus Mf-negative participants and then adding 4× their standard deviation to this mean negative OD. The sensitivity of the OvAg-specific IgG4 ELISA was determined with a contingency analysis; here, the OvAg-specific IgG4 responses and the O. volvulus Mf-positive and Mf-negative skin biopsy results were applied to determine the sensitivity of the ELISA (a sketch of the threshold and sensitivity calculations follows this passage). The negative/positive threshold values for the OvAg-IgG4 and the SsL3Ag-specific IgG4 responses from the Région Centrale were calculated as described above; i.e., for each ELISA plate the mean of the lowest IgG4 ODs of 8 blood samples was determined and then 4× their standard deviation was added. In Table 1 the detection of M. perstans DNA in dry blood samples by means of parasite-specific real-time PCR is detailed by region, village, gender and age group; the number of DNA-positive samples and the % positivity are shown. The overall prevalence of M. perstans DNA positivity was 14.9%. The difference between the Région Centrale and the Région Plateaux is particularly noticeable. Men were more often positive than women. The age group under 15 years was less often affected than the age group between 16 and 25 years and the 26–35 years age group. The lowest prevalence of M. perstans was found in the village of Kéméni/Région Centrale with 0.5%, and the highest was in Atinkpassa/Région Plateaux with 34.9%. In the M. perstans-positive patients, the ct-values of the parasite-specific rt-PCR decreased with increasing age, which indicated that M. perstans DNA levels steadily increased with age. Skin biopsies were collected from the participants in the Région Plateaux, and 5.3% of them were positive for microfilariae of O. volvulus. In Tsokple 12.4%, Igbowou-Amou 6.9%, Kpati-Cope 6.7%, Atinkpassa 3.4%, Tutu-Zionou 2.5% and Amouta 0% were positive for Mf of O. volvulus. The sensitivity of the IgG4-directed O. volvulus adult worm antigen-specific ELISA against the results of skin biopsies was evaluated, and the calculated sensitivity of the IgG4-directed OvAg-specific ELISA was 93.1%.
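The per-plate cut-off (mean OD of the negative reference samples plus four standard deviations) and the sensitivity estimated against the skin-snip reference can be written out in a few lines. The numbers below are illustrative only, and the function and variable names are placeholders rather than anything from the study.

```python
import numpy as np

def plate_cutoff(negative_ods, k=4.0):
    """Per-plate positivity threshold: mean OD of the negative reference
    samples plus k standard deviations (k = 4 in the text)."""
    neg = np.asarray(negative_ods, dtype=float)
    return neg.mean() + k * neg.std(ddof=1)

def sensitivity(test_positive, reference_positive):
    """Sensitivity of the ELISA against the parasitological reference
    (skin-snip microfilariae): TP / (TP + FN)."""
    test = np.asarray(test_positive, dtype=bool)
    ref = np.asarray(reference_positive, dtype=bool)
    tp = np.sum(test & ref)
    fn = np.sum(~test & ref)
    return tp / (tp + fn)

# Illustrative (not study) values:
neg_od = [0.08, 0.06, 0.09, 0.07, 0.05, 0.08, 0.06, 0.07, 0.09, 0.06]
cutoff = plate_cutoff(neg_od)                    # net ODs above this are positive
net_od = np.array([0.45, 0.04, 0.31, 0.02])      # background-subtracted samples
elisa_pos = net_od > cutoff
mf_pos = np.array([True, False, True, False])    # matching skin-snip results
sens = sensitivity(elisa_pos, mf_pos)            # 1.0 in this toy example
```

Specificity could be obtained analogously from the Mf-negative reference group, although the text reports only the sensitivity of this assay.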
In Table 2 the OvAg-specific IgG4 responses are shown by region and village and separated by gender and age group. The overall prevalence of IgG4 sero-positivity was 59.0% for OvAg. The inhabitants of villages in the Région Plateaux were, with 75.4%, significantly more often positive than the village inhabitants of the Région Centrale. IgG4 responses to OvAg in men were more often positive than in women. Between the two youngest age groups, IgG4 positivity did not differ. In the older age groups, positive responses were significantly more frequent in the 26–35-year-olds and also in those above 35 years when compared with the youngest age group. The lowest sero-positivity, with 32.1%, was detected in the village of Kéméni/Région Centrale and the highest, with 85.2%, in Amouta/Région Plateaux. The OvAg-specific IgG4 responses increased with age, indicating persisting O. volvulus infections and suggesting an accumulation of parasites over age. In Table 3 the IgG4 responses to SsL3Ag are shown separated by region, village, gender and age group. The overall prevalence of sero-positivity was 64.5% for S. stercoralis. In the Région Plateaux, positive IgG4 responses to SsL3Ag were observed more often (71.2%) than in the Région Centrale (55.5%). Positive IgG4 responses were similar in men and women. In the two youngest age groups the sero-positivity did not differ. IgG4 responses to SsL3Ag in the 26–35 years group and in those above 35 years were found more often than in those under 15 years of age. The lowest sero-prevalence was observed in the village of Kéméni/Région Centrale and the highest in Atinkpassa/Région Plateaux. The SsL3Ag-specific IgG4 responses did not correlate with the age of the participants, but they correlated positively with the OvAg-specific IgG4 reactivity. In Table 4 single, double and triple infections with Strongyloides stercoralis, Onchocerca volvulus and Mansonella perstans in the study participants are shown. Singly positive for M. perstans, O. volvulus or S. stercoralis were 2.1%, 7.0% and 13.9% of the participants, respectively. Doubly positive were 1.2% for O. volvulus and M. perstans, 1.8% for S. stercoralis and M. perstans, and 35.4% for O. volvulus and S. stercoralis. Of note, 15.6% of the participants showed positive IgG4 responses to SsL3Ag and were DNA-positive for M. perstans while being negative for IgG4 to the O. volvulus antigen. Triply positive for M. perstans, O. volvulus and S. stercoralis were 9.9%. In 28.8% of the participants, neither positive IgG4 responses to OvAg and SsL3Ag nor M. perstans-specific DNA were found. In all surveyed villages M. perstans DNA was detected in dry blood samples, and prevalence ranged from 0.5% to 34.9%. Such local differences are known from mansonelliasis-endemic areas and are attributed to fluctuating environmental factors which influence the parasite and vector populations. The surveyed villages in the Régions Plateaux and Centrale do not differ in altitude, climate and vegetation, and they are located in the arboreal savannah, where agricultural activities dominate the activities of the village population. The higher mansonelliasis prevalence in males corresponds with earlier observations, where a more intense exposure of men to infective Culicoides spp.
vectors was mentioned as a possible cause, but physiological parameters were also suggested. The rise in prevalence with increasing age corresponded with previous studies; the M. perstans Mf-positive cases were older than the Mf-negative ones, and the lowering ct-values in our rt-PCR with increasing age indicated that M. perstans Mf levels steadily increased. Thus, repetitive exposure to M. perstans may not elicit protective immunity, and M. perstans infections will steadily accumulate. For the detection of M. perstans we chose real-time PCR because M. perstans-specific antigens are not available. The M. perstans-specific rtPCR will detect blood-circulating microfilariae, and it will not respond to O. volvulus, whose microfilariae are skin-dwelling. The diagnostic approach by rtPCR allows the investigation of large sample numbers, and DNA detection by PCR is more sensitive than microscopy. Blood samples from mansonelliasis patients may contain few but also high Mf numbers per millilitre, and with the small volumes of blood collected on filter papers, patients with low Mf densities may falsely have been declared infection-free. In addition, real-time PCR facilitates monitoring of intervention programs and allows species-specific detection of treatment failure following rounds of mass treatment. In Togo, the Région Centrale and the Région Plateaux were included in the control activities of the Onchocerciasis Control Program in 1987, and in both areas mass drug administration of ivermectin has been applied since 1989 until today. In both regions the O. volvulus microfilarial prevalence declined markedly, and onchocerciasis is considered close to elimination. Onchocerciasis disease has largely been controlled with MDA of ivermectin, and for the treatment of lymphatic filariasis the repeated administration of ivermectin together with albendazole is recommended. While annually repeated ivermectin together with albendazole may have interrupted W. bancrofti parasite transmission in several districts in Togo, the drug combination will not efficaciously eliminate M. perstans microfilariae, and mansonelliasis will persist. Our serological ELISA to detect active O. volvulus infections was 93% sensitive, and the antigen-specific IgG4 reactivity to OvAg can reflect the status and history of infection. In previous work we applied the OvAg-IgG4-ELISA in surveys of onchocerciasis patients at distinct states of O. volvulus infection. IgG4 responses were studied in O. volvulus microfilariae-positive patients, in onchocerciasis patients with a post-patent O. volvulus infection and in infection-free endemic controls. OvAg-specific IgG4 responses were highest in Mf-positive patients and diminished with the post-patent state of O. volvulus infection, but IgG4 responses remained elevated in post-patent cases when compared with Mf-negative endemic controls. In onchocerciasis patients who had repeatedly received ivermectin therapy over 16 years, we found that IgG4 responses lessened only moderately in Mf-negative ivermectin-treated patients at 16 years post initial ivermectin, and their IgG4 reactivity remained significantly higher than in O. volvulus infection-free endemic controls. With an occult and expiring O.
volvulus infection, IgG4 responses in onchocerciasis patients were similar to those observed in endemic controls, and significantly lower than in Mf-positive cases. Those Mf-negative onchocerciasis-occult patients had previously been Mf-positive and became Mf-negative without ever having received treatment with ivermectin. This suggested that their infection expired gradually as their adult O. volvulus exceeded their natural life span. Antibody responses to the O. volvulus adult worm extract and the recombinant antigen Ov16 were studied in Togo earlier, and the IgG4 sero-prevalence was 59% and 34%, respectively. In Togo, the mass drug administration of ivermectin has reduced the O. volvulus microfilarial prevalence markedly, but in the northern and central river basins the transmission of O. volvulus has never been interrupted completely. O. volvulus infections still persisted in children and also in adults, and positive IgG4 responses indicated active O. volvulus infection. Ivermectin may have a partial in vivo effect against infective third-stage larvae of O. volvulus, but it has no effect against later larval stages, and the standard dose of 150 μg/kg does not kill adult O. volvulus, nor does it disrupt embryogenesis or spermatogenesis. These observations have been confirmed recently, and onchocerciasis may persist at meso-endemic levels despite >15 years of MDA with ivermectin. All three villages in the Région Centrale are situated in the river basin of the Mô, and O. volvulus DNA was detected in Simulium damnosum s.l. in 2016, suggesting that parasite transmission continues. With ongoing low-level parasite transmission, the endemic population will be exposed to trickle infections with L3 of O. volvulus, and such exposure will stimulate specific antibody responses. In all tested age and gender groups the positive O. volvulus IgG4 responses and M. perstans real-time PCR results were similar. The infection prevalence was slightly higher in males, and positive responses increased with age and were higher in the Région Plateaux than in the Région Centrale, suggesting that in the surveyed villages of the Plateaux region, despite repeated MDA with ivermectin and albendazole, favourable conditions for parasite transmission prevailed. Positive IgG4 responses to S. stercoralis infective third-stage larvae were detected in up to 80% of the participants. The ELISA we applied is based on in vitro cultured third-stage larvae of S. stercoralis, which were used as the defined antigen for detection of IgG4 antibodies, but the accuracy of our in-house ELISA might not be the same as found for commercial tests. The diagnostic accuracy of five serologic tests for S. stercoralis infection was evaluated by Bisoffi et al., and their work showed that the detection of total IgG against somatic antigens of S. stercoralis larvae was 88–97% sensitive and 96–99% specific. The Strongyloides ratti somatic antigen Bordier-ELISA showed 86–96% sensitivity and 91–97% specificity, the NIE-ELISA against recombinant L3 antigen 63–79% and 88–94%, the NIE-luciferase immunoprecipitation system 78–90% and 99–100%, and immunofluorescence against intact S. stercoralis filariform larvae was 91–99% sensitive and 83–91% specific. The accuracy and sensitivity of the S. stercoralis larval antigen ELISA are amongst the highest, application of the IgG4 subclass may improve diagnosis, and serology directed against larvae of S. stercoralis was found to be the best screening method in a S. stercoralis non-endemic setting. However, when using the S.
stercoralis larvae somatic antigen ELISA, cross-reactions with other parasites were observed for Loa loa, Wuchereria bancrofti, Onchocerca volvulus and Schistosoma spp., and cross-reactivity was highest, at 40%, for hookworm and Trichinella spp. co-infections. For sero-diagnosis of S. stercoralis, the most specific test is a luciferase immunoprecipitation system that employs a recombinant antigen, but this assay is not widely available. We are aware that S. stercoralis-specific PCR on stool samples does not demonstrate a particularly higher sensitivity in comparison with other coprological techniques, such as the Baermann method and agar plate culture. PCR would be appealing for its high specificity, while its low sensitivity, when compared with serological tests, may be due to the irregular larval output observed in chronic strongyloidiasis. PCR in combination with serology may increase the accuracy of epidemiological surveys on S. stercoralis infection prevalence. The positive IgG4 responses to S. stercoralis infective third-stage larvae in our study may be due to continuous exposure and re-infection; this may explain the differences between the two oldest and the youngest age groups, but a continuous rise of IgG4 to SsL3Ag over age was not observed. Singly positive for SsL3Ag were 13.9%, and 15.7% of the participants tested IgG4-positive for SsL3Ag and also DNA-positive for M. perstans while being negative for OvAg-specific IgG4. Obvious again were the large differences between the regions, for which regional differences in the transmission of soil-transmitted helminth parasites and also of O. volvulus may account. Furthermore, the SsL3Ag-specific IgG4 responses may be linked with the intensity of infection, as described for schistosomiasis, but may also reflect repeated S. stercoralis auto-infection with L1 larvae, which may stimulate specific antibody production; this may occur even several years after exposure, and strongyloidiasis may persist lifelong. Repeated ivermectin and albendazole distribution in our study area will affect the prevalence of S. stercoralis larvae in stool samples and the serology. Strongyloidiasis appears particularly overlooked, although endemic in many countries, and large-scale preventive chemotherapy with ivermectin significantly reduced, and maintained at low levels, the prevalence of S. stercoralis in Ecuador and Tanzania. Repeated follow-ups of populations who receive MDA with ivermectin and albendazole may show to what extent serological conversion to S. stercoralis may occur, and our present surveys on S. stercoralis were the first ones conducted in these regions. Mansonelliasis, onchocerciasis and strongyloidiasis remain prevalent in the surveyed regions, yet with local differences, and their levels of infection have been affected to a certain degree by MDA with ivermectin and albendazole. Our observations suggest that transmission of M. perstans, O. volvulus and S. stercoralis may be ongoing. The degree of positive test results in the examined rural communities advocates for the continuation of MDA with ivermectin and albendazole and for bi-annual therapies, which should further reduce helminth infection levels and interrupt parasite transmission. Further extended regional surveys are recommended which specifically address the morbidity and the progress towards elimination of M. perstans, O. volvulus and S. stercoralis infections.
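The single, double and triple positivity fractions reported in Table 4 are simple cross-tabulations of the three assay read-outs. The short pandas sketch below shows one way such a tabulation could be done; the data frame, its column names and the four example records are hypothetical placeholders, not the study data.

```python
import pandas as pd

# Hypothetical per-participant read-outs (True = positive for that assay).
results = pd.DataFrame({
    "Mp_PCR":  [True, False, True, False],   # M. perstans real-time PCR
    "Ov_IgG4": [True, True, False, False],   # IgG4 to OvAg
    "Ss_IgG4": [True, True, True, False],    # IgG4 to SsL3Ag
})

# Number of positive markers per participant (0-3) and its category label.
n_pos = results.sum(axis=1)
category = n_pos.map({0: "negative", 1: "single", 2: "double", 3: "triple"})

# Percentage of participants showing each specific infection pattern,
# analogous to the single/double/triple breakdown in Table 4.
pattern = results.apply(
    lambda row: "+".join(name for name, flag in row.items() if flag) or "none",
    axis=1)
table4_like = (pattern.value_counts(normalize=True) * 100).round(1)
```

Such a tabulation makes it straightforward to recompute the reported proportions whenever assay cut-offs, and therefore the underlying positivity calls, are revised.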
Background: Mansonella perstans, Onchocerca volvulus and Strongyloides stercoralis are widespread helminth parasites in the tropics. Their distribution remains difficult to determine as it may change during national disease control programs and with regional mass drug administration (MDA). Epidemiological surveys are of importance to evaluate the geographical distribution of these helminth parasites and the diseases they may cause; however, up-to-date epidemiological evaluations on M. perstans and S. stercoralis in Togo are rare, and surveys on O. volvulus are important especially in the context of MDA of ivermectin, which has been performed for decades. Methods: Dry blood samples (n = 924) were collected from rural populations in the Régions Centrale and Plateaux in Togo and analyzed by parasite-specific real-time PCR and ELISA techniques. Results: Dry blood samples from 733 persons were investigated by real-time PCR for DNA of blood-circulating M. perstans microfilariae, and a prevalence of 14.9% was detected. Distinct differences were observed between genders; positivity was higher in men and increased with age, and prevalence was highest in the Région Plateaux in Togo. IgG4 responses to O. volvulus antigen (OvAg) were studied in 924 persons and 59% were found positive. The distribution of parasite positivity between age and gender groups was higher in men, increasing with age, and regional differences were detected, being highest in the Région Plateaux. The diagnostic approach disclosed 64.5% positive IgG4 responses to the S. stercoralis infective third-stage larvae-specific antigen (SsL3Ag) in the surveyed regions. Antigen cross-reactivity of SsL3Ag with parasite co-infections may limit the calculated prevalence. Singly IgG4-positive for SsL3Ag were 13.9%, doubly positive for OvAg and SsL3Ag were 35.5% and triply positive for M. perstans, O. volvulus and S. stercoralis were 9.9%. Conclusions: Mansonelliasis, onchocerciasis and strongyloidiasis remain prevalent in the surveyed regions, yet with local differences. Our observations suggest that transmission of M. perstans, O. volvulus and S. stercoralis may be ongoing. The degree of positive test results in the examined rural communities advocates for the continuation of MDA with ivermectin and albendazole, and further investigations should address the intensity of transmission of these parasites.